>>
Posted by salman on 2/3/2025
As AI takes on more and more agentic actions, it will run into digital versions of the trolley problem, where it will have to draw on the core set of values embedded within it to decide on the best course of action. This will force those values to be stated explicitly.
...
labels:
>>
Posted by salman on 2/1/2025
If a perfect replica of Sam Altman’s brain could be created, would that be considered a human being? What if, like the ship of Theseus, we took this in steps, replacing Sam’s brain part by part and substituting more and more of it with a computer version? At what point does he stop being a human being?
...
labels: @sama
>>
Posted by salman on 10/1/2024
Great thinking and analysis, ultimately pointing to the incompatibility of current privacy laws with the new world of AI.
17 July 2024 - We have explained the technical aspects of a large language model in part 17 of our blog. But what conclusions can we draw from this in terms ...
Highlights
One possible solution to this problem is the use of so-called confidence thresholds, i.e. the systems are programmed in such a way that they either only produce an answer if the systems are rather certain of it or they indicate how certain they are of the individual statements. In the case of deterministic AI – i.e. systems that specialise in recognising or classifying certain things – such values are commonly used. In the field of generative AI, however, this is not yet very common. In our view, it should be used more often. For example, a chatbot can be programmed so that it only provides an answer if it is relatively certain of it. It is, however, not clear how high the probability must be for something to be considered (supposed) fact instead of (supposed) fiction.
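A minimal sketch of how such a confidence threshold might look in code, assuming a hypothetical generate() call that exposes per-token probabilities; the helper names and the 0.8 cutoff below are illustrative, not taken from the article:

```typescript
// Illustrative sketch only: gate a generative answer behind a confidence
// threshold, along the lines the article suggests. The generate() function,
// its return shape and the 0.8 cutoff are assumptions for illustration.

interface Generation {
  text: string;
  tokenProbs: number[]; // model's probability for each generated token
}

// Hypothetical model call; any API that exposes token probabilities would do.
declare function generate(prompt: string): Promise<Generation>;

const CONFIDENCE_THRESHOLD = 0.8; // where to set this is exactly the open question the article raises

async function answerIfConfident(prompt: string): Promise<string> {
  const g = await generate(prompt);
  // Use the mean token probability as a crude overall confidence score.
  const confidence =
    g.tokenProbs.reduce((sum, p) => sum + p, 0) / g.tokenProbs.length;
  if (confidence < CONFIDENCE_THRESHOLD) {
    return "I'm not confident enough to answer that.";
  }
  // Alternatively, return the answer together with the score, as the article also suggests.
  return `${g.text} (confidence ≈ ${(confidence * 100).toFixed(0)}%)`;
}
```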
Posted by salman on
Key words:
>>
Posted by salman on 7/28/2024
Disorders due to inbreeding - another quasi-human trait of AI. 🙃
Research suggests use of computer-made ‘synthetic data’ to train top AI models could lead to nonsensical results in future
Highlights
Posted by salman on
Key words:
>>
Posted by salman on 7/16/2024
I always thought AI would surface a lot of interesting philosophical questions about what it means to be human and what intelligence is. But I have rarely come across pieces that tackle these issues intelligently. This article does.
Grief-laden vitriol directed at AI fails to help us understand paths to better futures that are neither utopian nor dystopian, but open to radically weird possibilities.
Highlights
These insights don’t change the fundamental realities of the natural world — they reveal it to be something very different than what our intuitions and cultural cosmologies previously taught us. That revealing is the crux of the trauma. All the stages of grief are in response to the slow and then sudden fragmentation of previously foundational cultural beliefs. Like the death of a loved one, the death of a belief is profoundly painful.
The premise is that modern governments as we know them are the executives of the transformations to come and not an institutional form that will be overhauled if not absorbed by them. For better or worse, the latter scenario may be more plausible.
The leap of faith that human values are self-evident, methodologically discoverable and actionable, constructive, and universal is the fragile foundation of the alignment project. It balances on the idea that it will be possible to identify common concerns, to poll communities about their values and conduct studies about the ethics of possible consumer products, that it will be possible and desirable to ensure that the intelligence earthquake is as comfortable as possible for as many people as possible in as many ways as possible.
This stage of grief clings to the hope that if we start bargaining with the future then the future will have no choice but to meet us halfway. If only.
To what extent is the human artificialization of intelligence via language (as for an LLM) a new technique for making machine intelligence, and to what extent is it a discovery of a generic quality of intelligence, one that was going to work eventually, whenever somebody somewhere got around to figuring it out? If the latter, then AI is a lot less contingent, less sociomorphic, than it appears. Great minds are necessary to stitch the pieces, but eventually somebody was going to do it. Its inventors are less Promethean super-geniuses than just the people who happened to be there when some intrinsic aspect of intelligence was functionally demystified.
It does mean, however, that human intelligence is not what human intelligence thought it was all this time. It is both something we possess but which possesses us even more. It exists not just in individual brains, but even more so in the durable structures of communication between them, for example, in the form of language.
Like “life,” intelligence is modular, flexible and scalar, extending to the ingenious work of subcellular living machines and through the depths of evolutionary time. It also extends to much larger aggregations, of which each of us is a part, and also an instance. There is no reason to believe that the story would or should end with us; eschatology is useless. The evolution of intelligence does not peak with one terraforming species of nomadic primates.
Posted by salman on
Key words:
>>
Posted by salman on 5/10/2023
Sal Khan, the founder and CEO of Khan Academy, thinks artificial intelligence could spark the greatest positive transformation education has ever seen. He shares the opportunities he sees for students and educators to collaborate with AI tools -- including the potential of a personal AI tutor for every student and an AI teaching assistant for every teacher -- and demos some exciting new features for their educational chatbot, Khanmigo.
Highlights
Posted by salman on
Key words: ted talks technology education ai teaching kids
>>
Posted by salman on 12/21/2022
The wave of enthusiasm around generative networks feels like another Imagenet moment - a step change in what ‘AI’ can do that could generalise far beyond the cool demos. What can it create, and where are the humans in the loop?
Highlights
Instead of people trying to write rules for the machine to apply to data, we give the data and the answers to the machine and it calculates the rules. This works tremendously well, and generalises far beyond images, but comes with the inherent limitation that such systems have no structural understanding of the question - they don’t necessarily have any concept of eyes or legs, let alone ‘cats’. 
If I ask for ‘the chest burster scheme in Alien as directed by Wes Anderson’ and get a 92% accurate output, no-one will complain that Sigourney Weaver had a different hair style. But if I ask for some JavaScript, or a contract, I might get a ‘98% accurate’ result that looks a lot like the JavaScript I asked for, but the 2% error might break the whole thing. To put this another way, some kinds of request don’t really have wrong answers, some can be roughly right, and some can only be precisely right or wrong, and cannot be ‘98% correct’.
Yahoo tried paying people to catalogue the entire web one site at a time, and that was unscalable. Google, on one side, is based on the patterns of aggregate human behaviour of the web, and on the other side it gives you ten results and makes you pick one - manual curation by billions of users. The index is made by machine, but the corpus it indexes is made by people and the results are chosen by people. In much the same way, generative networks, so far, rely on one side on patterns in things that people already created, and on the other on people having new ideas to type into the prompt and picking the ones that are good
But the other side of this is that ML gives you not infinite interns but one intern with super-human speed and memory - one intern who can listen to a billion calls and say ‘you know, after 300m calls, I noticed a pattern you didn’t know about…’
Actually, what ML gives you is not only depth (i.e. the ability to “listen to a billion calls”) but also breadth: the ability to read a thousand books on psychology and persuasion tactics and other unrelated subjects and tie them into its conclusions. What is human ingenuity other than adding patterns from two seemingly unrelated subjects together? Perhaps ML will be brilliant - or perhaps our human ingenuity is just much simpler than assumed.
Posted by salman on
Key words:
>>
Posted by salman on 12/8/2022

Me: Write a blog post comparing chatGPT to ETFs

ChatGPT and ETFs are two very different things, so it's difficult to compare them directly. However, both are interesting in their own ways, and understanding the differences between them can be helpful in understanding how each can be used.

[.. the rest of ChatGPT's answer read like a 7th grader's essay... so I thought to explain further... ]


Me: chatGPT is based on all the systemic knowledge it can find, but probably cannot yet create new or idiosyncratic knowledge. Human ingenuity and talent can then use that as a base and focus on improving at the margins, purely on the idiosyncrasies of new knowledge. ETFs are also based on the systemic knowledge of the market as a whole - if you want to invest, buying a market ETF gives you average market performance. But human ingenuity can be used to make bets on idiosyncrasies - particularly mispriced securities that will outperform the market. Does that make sense?

Yes, that makes sense. ChatGPT is a large language model trained by OpenAI, which means that it has been trained on a huge amount of data and can generate text that sounds like it was written by a human. However, it is not capable of creating new knowledge or making unique insights. It can only generate text based on the knowledge it has been trained on.

Similarly, ETFs are based on the systemic knowledge of the market as a whole. Buying a market ETF will give you exposure to the overall market, but it won't allow you to make unique bets or take advantage of mispriced securities. It will simply track the performance of the market index or basket of assets that the ETF is based on.

However, both ChatGPT and ETFs can be used as a starting point for human ingenuity and talent. In the case of ChatGPT, a person can use the generated text as a starting point and then focus on improving it and adding their own unique insights. With ETFs, a person can use the market knowledge provided by the ETF as a starting point and then focus on identifying mispriced securities and making bets on them in order to potentially outperform the market. In both cases, human ingenuity and talent can be used to build on the systemic knowledge provided by ChatGPT or ETFs in order to create something new and unique.



...
labels:
>>
Posted by salman on 11/19/2022
What Elon Musk got wrong about Twitter, journalists and VCs got wrong about FTX, and Peter Thiel got wrong about crypto and AI — and why I made many of the same mistakes along the way.
Highlights
Posted by salman on
Key words:
>>
Posted by salman on 3/3/2022

In his widely-read post on web3, Moxie Marlinspike reiterates over and over again that “If there’s one thing I hope we’ve learned about the world, it’s that people do not want to run their own servers.” The image he seems to have in mind is of a big old-style PC with a CD drive sitting under the messy desk of a nerd, who spends his(!) time making sure it is running smoothly. So of course, seen in that way, as Moxie notes, “Even nerds do not want to run their own servers at this point.” But then he goes on to say that “Even organizations building software full time do not want to run their own servers at this point.” And therein lies his logical flaw. All these “organizations building software” do have their own servers – except that the servers are running in the cloud, and some are even called ‘serverless’. Of course individuals don’t want to maintain physical personal servers sitting under their desks, but, much like those software organizations, we may all very well want to have our own personal servers in the cloud, if these were easy enough to install and maintain, and if we had a rich ecosystem of apps running on them.

This has been an ideal since the beginnings of web1, when, as Moxie himself says, we believed “that everyone on the internet would be both a publisher and consumer of content as well as a publisher and consumer of infrastructure” – effectively implying that we each have our own personal servers and control our data environment. Moxie says it is too “simplistic” to think that such an ideal is still attainable. Many web3 enthusiasts believe web3 can provide the answer. My view is that (regardless of the number we put in front of it), at some point, technology and technology mores will have advanced far enough to make such a personal server ecosystem feasible. We may be close to that point today.

But before laying out my reasoning, let me present two other minor critiques of Moxie’s excellent article.  

First is the way in which Moxie criticizes what he calls “protocols” as being too slow compared to “platforms”. Although he may be right in the specific examples he notes – i.e. that private-company-led initiatives from the likes of Slack and WhatsApp have been able to move so much faster than standard open ‘protocols’ such as IRC – he makes this argument in the general context of web2 vs web3, and thus seems to imply that ALL open community-led projects will fail because private-led initiatives will inevitably innovate faster. But how could such a statement be reconciled with something like Linux, the quintessential open source project, which is the most-used operating system for accessing the web and for running web servers? How can one not think of HTML and the web itself, or JavaScript, each of which is an open platform, or a simple agreed-upon convention, upon which so much innovation has been created over the past decades? In defense of Moxie’s point, if you talk to anyone involved in the development of these over the years, chances are they will complain about how slow-moving their respective technical committees can be. But perhaps that is how fundamental tech building blocks should be – it is precisely the slow-moving (and arguably even technologically under-innovative) nature of platforms and protocols that provides the stability needed for fast-moving innovators to build on them. The critical societal question isn’t whether a particular protocol or web3 initiative will be innovative in and of itself, but whether any one or more such initiatives will serve as a foundation upon which multiple fast-moving innovations can be built, preferably using an architecture which supports a healthy ecosystem. The base elements (or ‘protocols’) don’t necessarily need to be fast-moving themselves – they just need to have the right architecture to induce innovation on top of them.

In this light, as Azeem Azhar has noted, a couple of the more interesting web3 initiatives are those that are trying to use cryptocurrency-based compensation schemes to create a market mechanism for services, thus tackling problems that web2 companies had previously failed to solve. One example is Helium, which is a network of wifi hotspots, and another is Ethereum Swarm, which is creating a distributed personal storage system. Both of these ideas had been tried a decade or two ago but never gained the expected popularity, and they are now being reborn with a web3 foundation and incentive system. Indeed, technology, as it tends to, may have advanced far enough today to make them successful.

My last critique of Moxie’s article is that he contends that any web2 interface to web3 infrastructure will inevitably lead to immense power concentration for the web2 service provider, due to the winner-takes-all nature of web2. I would contend that it does not need to be that way, and we can point to the cloud infrastructure services market as evidence. This may seem like a counter-intuitive example, given the dominance of big tech, and especially Amazon’s AWS, in the cloud infrastructure market, but the dynamics of this market are vastly different from the b2c markets that are dominated by the same big-tech companies (Google, Amazon, and Microsoft). Despite every effort by these big-tech usual suspects to provide proprietary add-ons to their cloud services so as to lock in their customers, they are ultimately offering services on a core open-source tech stack. This means that they are competing on a relatively level playing field, knowing that each of the thousands of businesses that have thrived on their infrastructure can get up and leave for a competing cloud provider. The customers are not locked in by the network effects that are typically seen in b2c offerings, as is clear from the rich ecosystem of companies that have thrived on these platforms. Furthermore, not only can competitors in the cloud infrastructure market take on various niche portions of this giant market, but new entrants like Cloudflare and Scaleway can also contemplate competing head-on. This competition, enabled by the existence of a core open-source tech stack, keeps even the most dominant service providers honest(!), as their customers continue to be king. There is no better evidence for that than the vibrancy of the ecosystems built on top of these services, in stark contrast to the consumer world, where the lack of interoperability and the strength of the lock-ins provide immense barriers to entry. Given a similar architecture, there is no reason these same dynamics can’t be transposed to the personal server space and the b2c market.

Yet, by going in with the assumption that such a thing is impossible, Moxie misses the opportunity to think through what new architectural solutions become possible by combining web3 elements with our existing technological interactions, and whether such new architectures could enable strong enough competition and portability to curb the winner-takes-all dynamics of the current b2c consumer web services market.

This brings us back to my pet peeve of personal servers – a concept that has been around for more than a decade, and that the tech world has come to believe will never work. The question is: have recent developments in tech fundamentally shifted the landscape to make the original ideal of web1 viable again? My view is that the stars may finally be lining up, and that such an architecture is indeed possible.

✨ Web 3

A first star may be the launch of Ethereum Swarm, “a system of peer-to-peer networked nodes that create a decentralised storage”. As mentioned, Swarm uses Ethereum smart contracts as the basis of an incentive system for node participants to store data. It is quintessentially web3. Yet it acts as a core infrastructure layer on which anything can be built. So the Fair Data Society, a related organization, built fairdrive, a web2-based gateway to access this storage infrastructure – a key building block for allowing other web2-based applications on top of it. Moxie’s blog post would argue that any such web2 interface would reconcentrate power in the hands of the same web2 service provider. But that really depends on what is built within and on top of that gateway – the architecture that lies on top of the foundational storage. If the data stored is in an easily readable format, based on non-proprietary and commonly used standards, and if there are multiple competing gateways to access this data, allowing anyone to switch providers within a minute or two, then there is no reason for those web2 interface service providers to be able to concentrate undue power.

So how could these two elements come together – the data format and the portability / switch-ability?

🤩 Web 2 Switch-ability

As mentioned above, the competition among b2b cloud infrastructure providers has continued to allow immense value to accrue to their customers. Until now, these customers have been other businesses that use the cloud. Even so, the cloud providers have done such a good job providing better and better solutions for these customers, solutions which are ever easier to deploy, that they are now almost easy enough to be deployed for consumers as well. So not only can one easily envisage a world where multiple service providers compete by providing the best possible personal cloud services to consumers, one does not even need to wait for that. Today, it takes just a few clicks to create a new server on Heroku or on glitch.com or a myriad of other services. Anyone can set up their own server within a few minutes. This bodes well for a leading edge of tech-savvy consumers to do exactly that!

But then what? What would you put on those servers? What data, and in what format? How can you make sure that such data is compatible across server types, and that such servers are interoperable (and switch-able), wherever they may sit?

💫 Web1 and the Personal Server Stack

A first step towards such interoperability is the CEPS initiative, which came out of the 2019 mydata.org conference and aimed to define a set of Common Endpoints for Personal Servers and datastores, so that the same app can communicate with different types of personal servers using the same URL endpoints. (i.e. the app only needs to know the base URL of each user’s server to communicate with it, rather than implementing a new API for every server type.) With CEPS, any app developer can store a person’s app data on that person’s personal storage space, as long as the storage space has a CEPS-compatible interface. CEPS also starts to define how different CEPS-compatible servers can share data with each other, for example to send a message, to give access to a piece of data, or to publish something and make it publicly accessible. This data – the user’s data sitting on their personal server – is assumed to be stored in NoSQL data tables associated with each app. And whether the data is sitting in flat files or in a cloud-based database, it can easily be downloaded by its owner and moved somewhere else without losing its cohesiveness. This ensures that ‘user data’ is indeed easily portable, and so the ‘user’ or ‘data owner’ can easily switch services – i.e. the service provider doesn’t have a lock-in on the data owner.
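To make this concrete, here is a minimal sketch of what an app talking to a CEPS-style personal server could look like; the endpoint path, auth scheme and names below are hypothetical placeholders for illustration, not the actual CEPS specification:

```typescript
// Illustrative sketch: an app writes a record to a user's personal server,
// knowing only that server's base URL. The endpoint path, auth scheme and
// names here are hypothetical, not the actual CEPS spec.

interface PersonalServer {
  baseUrl: string;     // e.g. "https://alice.example-server.net" (hypothetical)
  accessToken: string; // granted to this app by the data owner
}

// Store one record in the app's own NoSQL-style table on the user's server.
async function writeRecord(
  server: PersonalServer,
  appId: string,
  table: string,
  record: Record<string, unknown>
): Promise<void> {
  const res = await fetch(
    `${server.baseUrl}/ceps/apps/${appId}/tables/${table}`, // hypothetical endpoint
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${server.accessToken}`,
      },
      body: JSON.stringify(record),
    }
  );
  if (!res.ok) throw new Error(`Write failed: ${res.status}`);
}
```

The point of the common-endpoint idea is that only baseUrl changes from one user’s server to another; the app code stays identical, which is what makes switching providers cheap.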

A second step would be to also store the apps themselves on the personal data space. Code is data, after all, and having our apps served from other people’s servers seems incompatible with the aim of controlling our own data environments; it would leave too much room for app providers to gain the kind of power Moxie has warned us against. These apps, like one’s data, also need to be in a readable format and transportable across servers and mediums. Luckily, since the advent of web1, we have all been using such apps on a daily basis – these are the HTML, CSS and JavaScript text files that together make up each and every web page. Instead of having the app providers host these files, they can be stored on each person’s personal storage space and served from there. Then each data owner would have control over their data, as well as over the app itself. The use of such an old standard not only ensures easy portability of the apps, it also means that thousands of developers, even novices, would be able to build apps for this environment, or to convert their existing web apps to work in it. It also implies that the server layer itself plays a very small role, and has less of an opportunity to exert its dominance.
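As a rough illustration of that last point, a personal server could hand back an app’s own HTML, CSS and JavaScript files straight from the data owner’s storage; the directory layout and port below are assumptions for the sake of the sketch, not freezr’s actual implementation:

```typescript
// Illustrative sketch: serve an app's static html/css/js files from the data
// owner's own storage. The "user_data/apps/<appId>/..." layout and the port
// are assumptions for illustration only.

import * as http from "http";
import * as fs from "fs/promises";
import * as path from "path";

const APPS_DIR = path.resolve("user_data/apps"); // hypothetical storage layout

const MIME: Record<string, string> = {
  ".html": "text/html",
  ".css": "text/css",
  ".js": "text/javascript",
};

http
  .createServer(async (req, res) => {
    // URLs like /<appId>/index.html map onto the owner's own storage.
    const filePath = path.normalize(path.join(APPS_DIR, req.url ?? "/"));
    if (!filePath.startsWith(APPS_DIR)) {
      // Reject attempts to escape the storage directory.
      res.writeHead(403);
      res.end();
      return;
    }
    try {
      const body = await fs.readFile(filePath);
      res.writeHead(200, {
        "Content-Type": MIME[path.extname(filePath)] ?? "text/plain",
      });
      res.end(body);
    } catch {
      res.writeHead(404);
      res.end("Not found");
    }
  })
  .listen(8080);
```

The server itself does little more than read files, which is consistent with the claim above that the server layer plays a very small role.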

I started this essay by claiming that people “may very well want to have their own personal servers in the cloud, if these were easy enough to install and maintain, and if they had a rich ecosystem of apps running on them.” I have tried to depict an environment which may have a chance of meeting these criteria. If we start by converting our existing web apps to this architecture, we may be able to use the web3 foundation of Swarm to forge a path towards the web1 ideals of controlling our web environment and data, all with the ease of use and ease of development which we have gotten used to from web2.

🌹 Any Other Name

So then, the only problem remaining would be the name ‘Personal Server’… because Moxie may be right on that too: after all these years of false starts, it has become such a truism that no one would ever want a ‘personal server’ that the term itself may be too tainted for anyone to want to run one... So perhaps we should just rename ‘personal servers’ to “Serverless Application Platforms”.

____________________

Note: freezr is my own implementation of a personal server (ahem.. Serverless Application Platform), consistent with the architecture laid out above.

I will be giving a demo of freezr at the We are Millions hackathon on March 10th.



...
labels:
More