>>
Posted by salman on 2/3/2025
As AI takes on more and more agentic actions, it will run into digital versions of the trolley problem, where it will have to draw on the core set of values embedded within it to decide on the best course of action. This will force those values to be stated explicitly.
...
>>
Posted by salman on 2/1/2025
If a perfect replica of Sam Altman’s brain could be created, would that be considered a human being? What if, like the Ship of Theseus, we took this in steps and replaced Sam’s brain part by part, substituting more and more of it with a computer version? At what point would he stop being a human being?
...
labels: @sama
>>
Posted by salman on 10/1/2024
Great thinking and analysis, ultimately pointing to the incompatibility of current privacy laws with the new world of AI.
17 July 2024 - We have explained the technical aspects of a large language model in part 17 of our blog. But what conclusions can we draw from this in terms ...
Highlights
One possible solution to this problem is the use of so-called confidence thresholds, i.e. the systems are programmed in such a way that they either only produce an answer if they are rather certain of it, or they indicate how certain they are of the individual statements. In the case of deterministic AI – i.e. systems that specialise in recognising or classifying certain things – such values are commonly used. In the field of generative AI, however, this is not yet very common. In our view, it should be used more often. For example, a chatbot can be programmed so that it only provides an answer if it is relatively certain of it. It is, however, not clear how high the probability must be for something to be considered (supposed) fact instead of (supposed) fiction.
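A minimal sketch of that gating idea, assuming a hypothetical generate_with_logprobs helper that returns an answer plus per-token log-probabilities (real providers expose these under different names):

    import math

    # Assumed cutoff - where to set it is exactly the open question the article raises.
    CONFIDENCE_THRESHOLD = 0.85

    def generate_with_logprobs(prompt: str) -> tuple[str, list[float]]:
        """Hypothetical helper: call an LLM and return (text, per-token log-probabilities)."""
        raise NotImplementedError("wire this up to your model provider")

    def answer_if_confident(prompt: str) -> str:
        text, token_logprobs = generate_with_logprobs(prompt)
        # Geometric mean of token probabilities as a crude whole-answer confidence score.
        confidence = math.exp(sum(token_logprobs) / len(token_logprobs))
        if confidence >= CONFIDENCE_THRESHOLD:
            return text
        # Below the threshold, refuse rather than present possible fiction as fact.
        return f"I'm not confident enough to answer this (confidence {confidence:.2f})."

Token probabilities are only a proxy for factual accuracy, which is part of why it is so unclear where the threshold should sit.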
>>
Posted by salman on 7/28/2024
Disorders due to inbreeding - another quasi-human trait of AI. 🙃
Research suggests use of computer-made ‘synthetic data’ to train top AI models could lead to nonsensical results in future
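The mechanism is easy to show in miniature. A toy sketch (my construction, not the article's): fit a simple "model" to data, sample synthetic data from it while losing the tails, refit on the samples, and repeat.

    import random
    import statistics

    # "Real" data: 1,000 samples from a normal distribution.
    data = [random.gauss(0, 1) for _ in range(1000)]

    for generation in range(10):
        # "Train" a model: estimate the mean and stdev of the current data.
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        print(f"generation {generation}: stdev = {sigma:.3f}")
        # Sample the next training set from the model itself, but - like a
        # generative model that underrepresents rare events - drop the tails.
        samples = (random.gauss(mu, sigma) for _ in range(2000))
        data = [x for x in samples if abs(x - mu) < 2 * sigma][:1000]

The spread shrinks by roughly 12% per generation: the tails vanish first, and each generation inherits a narrower world than the last. The "nonsensical results" are that process at LLM scale.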
>>
Posted by salman on 7/16/2024
I always thought AI would surface a lot of interesting philosophical questions about what it means to be human, and what intelligence is. But I have rarely come across pieces that tackle these issues intelligently. This article does.
Grief-laden vitriol directed at AI fails to help us understand paths to better futures that are neither utopian nor dystopian, but open to radically weird possibilities.
Highlights
These insights don’t change the fundamental realities of the natural world — they reveal it to be something very different than what our intuitions and cultural cosmologies previously taught us. That revealing is the crux of the trauma. All the stages of grief are in response to the slow and then sudden fragmentation of previously foundational cultural beliefs. Like the death of a loved one, the death of a belief is profoundly painful.
The premise is that modern governments as we know them are the executives of the transformations to come and not an institutional form that will be overhauled if not absorbed by them. For better or worse, the latter scenario may be more plausible.
The leap of faith that human values are self-evident, methodologically discoverable and actionable, constructive, and universal is the fragile foundation of the alignment project. It balances on the idea that it will be possible to identify common concerns, to poll communities about their values and conduct studies about the ethics of possible consumer products, that it will be possible and desirable to ensure that the intelligence earthquake is as comfortable as possible for as many people as possible in as many ways as possible.
This stage of grief clings to the hope that if we start bargaining with the future then the future will have no choice but to meet us halfway. If only.
To what extent is the human artificialization of intelligence via language (as for an LLM) a new technique for making machine intelligence, and to what extent is it a discovery of a generic quality of intelligence, one that was going to work eventually, whenever somebody somewhere got around to figuring it out? If the latter, then AI is a lot less contingent, less sociomorphic, than it appears. Great minds are necessary to stitch the pieces, but eventually somebody was going to do it. Its inventors are less Promethean super-geniuses than just the people who happened to be there when some intrinsic aspect of intelligence was functionally demystified.
It does mean, however, that human intelligence is not what human intelligence thought it was all this time. It is both something we possess but which possesses us even more. It exists not just in individual brains, but even more so in the durable structures of communication between them, for example, in the form of language.
Like “life,” intelligence is modular, flexible and scalar, extending to the ingenious work of subcellular living machines and through the depths of evolutionary time. It also extends to much larger aggregations, of which each of us is a part, and also an instance. There is no reason to believe that the story would or should end with us; eschatology is useless. The evolution of intelligence does not peak with one terraforming species of nomadic primates.
>>
Posted by salman on 5/10/2023
Sal Khan, the founder and CEO of Khan Academy, thinks artificial intelligence could spark the greatest positive transformation education has ever seen. He shares the opportunities he sees for students and educators to collaborate with AI tools -- including the potential of a personal AI tutor for every student and an AI teaching assistant for every teacher -- and demos some exciting new features for their educational chatbot, Khanmigo.
Key words: ted talks technology education ai teaching kids
>>
Posted by salman on 12/21/2022
The wave of enthusiasm around generative networks feels like another ImageNet moment - a step change in what ‘AI’ can do that could generalise far beyond the cool demos. What can it create, and where are the humans in the loop?
Highlights
Instead of people trying to write rules for the machine to apply to data, we give the data and the answers to the machine and it calculates the rules. This works tremendously well, and generalises far beyond images, but comes with the inherent limitation that such systems have no structural understanding of the question - they don’t necessarily have any concept of eyes or legs, let alone ‘cats’. 
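The contrast is easy to see in code; a sketch (my example, not the author's) using scikit-learn:

    from sklearn.tree import DecisionTreeClassifier

    # The old way: a person writes the rule for the machine to apply to the data.
    def is_cat_handwritten(weight_kg: float, ear_pointiness: float) -> bool:
        return weight_kg < 10 and ear_pointiness > 0.7  # brittle and hand-tuned

    # The ML way: give the machine the data and the answers; it calculates the rule.
    X = [[4.1, 0.9], [5.0, 0.8], [30.0, 0.2], [25.0, 0.3]]  # (weight_kg, ear_pointiness)
    y = [1, 1, 0, 0]                                        # answers: 1 = cat, 0 = dog

    model = DecisionTreeClassifier().fit(X, y)
    print(model.predict([[4.5, 0.85]]))  # apply the learned rule to new data

Nothing inside the fitted model corresponds to a concept of "cat"; it has only found a boundary in the numbers it was shown, which is exactly the structural limitation described above.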
If I ask for ‘the chest burster scene in Alien as directed by Wes Anderson’ and get a 92% accurate output, no-one will complain that Sigourney Weaver had a different hair style. But if I ask for some JavaScript, or a contract, I might get a ‘98% accurate’ result that looks a lot like the JavaScript I asked for, but the 2% error might break the whole thing. To put this another way, some kinds of request don’t really have wrong answers, some can be roughly right, and some can only be precisely right or wrong, and cannot be ‘98% correct’.
Yahoo tried paying people to catalogue the entire web one site at a time, and that was unscalable. Google, on one side, is based on the patterns of aggregate human behaviour of the web, and on the other side it gives you ten results and makes you pick one - manual curation by billions of users. The index is made by machine, but the corpus it indexes is made by people and the results are chosen by people. In much the same way, generative networks, so far, rely on one side on patterns in things that people already created, and on the other on people having new ideas to type into the prompt and picking the ones that are good
But the other side of this is that ML gives you not infinite interns but one intern with super-human speed and memory - one intern who can listen to a billion calls and say ‘you know, after 300m calls, I noticed a pattern you didn’t know about…’
Actually what ML gives you is not only depth (i.e. the ability to “listen to a billion calls”) but also breadth: the ability to read a thousand books on psychology and persuasion tactics and other unrelated subjects and tie them into its conclusions. What is human ingenuity other than adding patterns from two seemingly unrelated subjects together? Perhaps ML will be brilliant - or perhaps our human ingenuity is just much simpler than assumed.
>>
Posted by salman on 12/8/2022

Me: Write a blog post comparing chatGPT to ETFs

ChatGPT and ETFs are two very different things, so it's difficult to compare them directly. However, both are interesting in their own ways, and understanding the differences between them can be helpful in understanding how each can be used.

[... the rest of ChatGPT’s answer read like a 7th grader’s essay... so I thought to explain further...]


Me: chatGPT is based on all the systemic knowledge it can find, but probably cannot yet create new or idiosyncratic knowledge. Human ingenuity and talent can then use that as a base and focus on improving on the margins, purely on the idiosyncrasies of new knowledge. ETFs are also based on the systemic knowledge of the market as a whole - if you want to invest, buying a market ETF gives you average market performance. But human ingenuity can be used to make bets on idiosyncrasies - particularly mispriced securities that will outperform the market. Does that make sense?

Yes, that makes sense. ChatGPT is a large language model trained by OpenAI, which means that it has been trained on a huge amount of data and can generate text that sounds like it was written by a human. However, it is not capable of creating new knowledge or making unique insights. It can only generate text based on the knowledge it has been trained on.

Similarly, ETFs are based on the systemic knowledge of the market as a whole. Buying a market ETF will give you exposure to the overall market, but it won't allow you to make unique bets or take advantage of mispriced securities. It will simply track the performance of the market index or basket of assets that the ETF is based on.

However, both ChatGPT and ETFs can be used as a starting point for human ingenuity and talent. In the case of ChatGPT, a person can use the generated text as a starting point and then focus on improving it and adding their own unique insights. With ETFs, a person can use the market knowledge provided by the ETF as a starting point and then focus on identifying mispriced securities and making bets on them in order to potentially outperform the market. In both cases, human ingenuity and talent can be used to build on the systemic knowledge provided by ChatGPT or ETFs in order to create something new and unique.

...
>>
Posted by salman on 11/19/2022
What Elon Musk got wrong about Twitter, journalists and VCs got wrong about FTX, and Peter Thiel got wrong about crypto and AI — and why I made many of the same mistakes along the way.