There is no AI alignment problem. I encourage you to draw the inference that I’m not one of the smartest people you know. AGI is the eschatological leap of faith that a series of mottes will converge to a bailey. The big achievement of the AGI thought-experiment crowd (not AI practice) is to label their motte-and-bailey fallacy as a “moving the goalposts” lack of vision on the part of skeptics. It’s roughly like believing that building better and better airplanes will converge to a time machine or hyperspace drive. The adjective “general” does way too much work in a context where “generality” is a fraught matter.
I should add: I don’t believe humans are AGIs either.

In fact, I don’t think “AGI” is a well-posed concept at all. There are approximately Turing-complete systems, but assuming that the generality of a UTM, relative to the clean notion of computability, is the same thing as generality of “intelligence” is invalid.

At least the really silly foundation on IQ and psychometrics is withering away. I think the Bostrom-style simulationist foundation is at least fun to think about, though even sillier taken literally. But it highlights the connection to the hard problem of consciousness.

I’ve been following this conversation since the beginning, about 15 years ago, and I feel I need to re-declare my skepticism every few years, since it’s such a powerful attractor around these parts. Like periodically letting my extended religious family know I’m not religious.

It’s interesting that the AGI ideology only appeared late into the AI winter, despite associated pop-tropes (Skynet etc.) being around much longer. AGI is a bit like the philosopher’s stone of AI. It has sparked interesting developments, just as alchemy did chemistry.

Re: AI alignment in the more practical sense of deep-learning biases and explainability, I’ve seen nothing new on the ethics front that is not a straightforward extrapolation of ordinary risk management and bureaucratic biases. The tech is interesting and new, as is the math. The ethics side is important, but so far nothing I’ve read there strikes me as both important and new, or particularly unique to AI systems. Treating it as such just drives a new obscurantism.
Previous thread about this from February. Wish I’d indexed all my threads over the past 5-6 years to see if my positions have evolved or shifted.

But to the original QT, a new point I’m adding here is the non-trivial observation that those who most believe in AGIs also happen to be convinced they are the smartest people around (and apparently manage to convince some around them, though Matt appears to be snarking). Circular, like the anthropic principle. You notice that Earth is optimized to sustain human life. Your first thought is: a God created this place just for us. Then you have the more sophisticated thought that if it weren’t Goldilocks-optimal, we wouldn’t be around to wonder why. But notice that the first thought posits a specific kind of extrapolation: an egocentric extrapolation. “God” is not a random construct but an extrapolation of an egocentric self-image as the “cause” of the Goldilocks zone.
The second thought makes it unnecessary to posit that. Flip it around to be teleological: in this case, a certain class of people does well in a pattern of civilization. If you assume that pattern is eternal, that class of people suggests evolution toward an alluring god-like omega point, and a worry that machines will get there first.

But as a skeptic, you wonder… if this civilization didn’t have this pattern, these people wouldn’t be around worrying about superintelligence. Some other group would be. Top dogs always fight imaginary gods. No accident that the same crowd is also most interested in living forever. A self-perpetuation drive shapes this entire thought space. This world is your oyster, you’re winning unreasonably easily, and you feel special. You want it to continue. You imagine going from temporarily successful human to permanently successful superhuman. Attribution bias helps pick out variables to extrapolate and competitors to fear.

The alt explanation is less flattering. You’re a specialized being adapted to a specialized situation that is transient on a cosmic scale but longer than your lifespan. It is easy and tempting to confuse a steady local co-evolution gradient (Flynn effect, anyone?) for Destiny.

I’m frankly waiting for a different kind of Singularity: one comparable to chemistry forking off from alchemy because it no longer needed the psychospiritual scaffolding of transmutation to gold or the elixir of life to think about chemical reactions. I’m glad this subculture inspired a few talented people to build interesting bleeding-edge systems at OpenAI and DeepMind. But the alchemy is starting to obscure the chemistry now. My own recent attempt to develop a “chemistry, not alchemy” perspective:
Superhistory, Not Superintelligence
Artificial Intelligence is really Artificial Time
https://breakingsmart.substack.com/p/superhistory-not-superintelligence

As usual I’ve been too wordy. This is the essential point: clever people overestimating the importance of cleverness in the grand scheme of things.

The many sad or unimpressive life stories of Guinness-record IQ types illustrate that intelligence has diminishing returns even in our own environment. If you think you’d be 2x more successful if you were 2x smarter, you might be disappointed. It’s a bit like me being good at 2x2s and worrying that somebody will discover the ultimate world-destroying 2x2. Except most strengths don’t tempt you into such conceits or narcissistic projections. Intelligence, physical strength, adversarial cunning, and beauty are among the few that do tempt people this way, because they are totalizing aesthetic lenses on the world. When you have one of these hammers in your hand, everything looks like a nail.
AI

One of the increasingly prominent uses of data streams is their role in programming and refining artificial intelligence systems. Most artificial intelligence systems are relatively inert without input generated from human activity. Google Translate was generated by scraping millions of translations off the Internet and other databases, and then processing these translations through sophisticated algorithms.

And this pattern holds for virtually all other artificial intelligence systems. A giant act of statistics is made practically free by Moore’s Law, but at its core the act is based on the real work of people. We can’t tell how much of the success of an AI algorithm is due to people changing themselves to make it seem successful. This is just the big-data way of stating the fundamental ambiguity of artificial intelligence: people have repeatedly proven adaptable enough to lower their standards in order to make software seem smart. Efficiency is a synonym for how well a server is influencing the human world to align with its own model of the world.
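To make the “giant act of statistics” point concrete, here is a minimal sketch. It is my own toy illustration with a made-up three-pair corpus, not anything like Google Translate’s actual pipeline:

```python
# A toy of "translation" as pure statistics over scraped human translations.
# Hypothetical illustration only; real systems are vastly more sophisticated.
from collections import Counter, defaultdict

# Imaginary scraped corpus of aligned sentence pairs (source, target).
corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the cat eats",   "le chat mange"),
    ("the dog sleeps", "le chien dort"),
]

# Count source/target word co-occurrences. The only "knowledge" here is
# frequency skew over translation work humans already did.
cooc = defaultdict(Counter)
tgt_count = Counter()
for src, tgt in corpus:
    tgt_count.update(tgt.split())
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

def translate_word(word):
    if word not in cooc:
        return word  # never seen by a human translator: nothing to say
    # Score by association strength rather than raw count, so frequent
    # words like "le" don't swamp everything.
    return max(cooc[word], key=lambda t: cooc[word][t] / tgt_count[t])

# Prints "le chien mange": a sentence never in the corpus, produced by
# statistics alone, with zero understanding.
print(" ".join(translate_word(w) for w in "the dog eats".split()))
```

Scale the corpus up by a few billion pairs and swap in cleverer statistics, and the point stands: the core of the act is still the real work of people.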
A small example demonstrating a key similarity between AIs and bureaucracies. Both act mechanically to prioritize self-preservation as a primary objective, ahead of mission.
Pournelle’s iron law, drone edition.
Even Asimov recognized this; that’s why self-preservation was the third and weakest law. Though an Asimovian three-laws drone couldn’t be used for a military attack in the first place without training it to see your enemy as not human at all. Again similar to the design of bureaucracy.
The future of AI industry: “You program the 3 laws, I’ll program the human-recognition module”
1.

My own initial reaction to gpt2 helps me understand why pre-modern tribes reacted to being filmed and shown the footage as if the camera were stealing their souls. Except, I *like* the idea instead of being horrified by it. Unfortunately, it’s as untrue for AIs as for film cameras.
In fact, gpt2 has helped me clarify exactly why I think the moral/metaphysical panic around AI in general, and AGI in particular, is easily the silliest thing I’ll see in my lifetime. It’s the angels-on-a-pinhead concern of our time. Not even wrong.

2.
AI won’t steal our souls
AI won’t terk ehr jerbs
AI won’t create new kinds of risk
AI isn’t A or I
I’d call it “cognitive optics”… it’s a bunch of lenses and mirrors that reflect, refract, and amplify human cognition. AIs think in the same way telescopes see. I.e., they don’t.
“AIs reflect/reproduce our biases” misses the point by suggesting you could prevent that. That’s ALL they (deep learning algos) do. Biases are the building blocks. Take that out and there’s nothing left. E.g., gpt2 has picked up on my bias towards words like “embody” and “archetype”.
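A toy way to see “biases are the building blocks” (my contrived illustration; nothing to do with gpt2’s actual architecture): a statistical language model just is its frequency skews.

```python
# A bigram "language model" that is nothing but conditional frequency counts.
# Toy illustration only: remove the skew and nothing remains to sample.
import random
from collections import Counter, defaultdict

words = ("archetypes embody patterns " * 5
         + "patterns embody archetypes " * 3).split()

# Count which word tends to follow which. These skews ARE the model.
follows = defaultdict(Counter)
for a, b in zip(words, words[1:]):
    follows[a][b] += 1

def generate(seed, n=8):
    out = [seed]
    while len(out) <= n and follows[out[-1]]:
        nxt, weights = zip(*follows[out[-1]].items())
        # Sample in proportion to observed bias. Sampling uniformly
        # ("debiased") would emit pure noise; deleting the counts
        # altogether would emit nothing at all.
        out.append(random.choices(nxt, weights=weights)[0])
    return " ".join(out)

print(generate("patterns"))
```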
Taking the bias out of AI is like taking the economy out of a planned economy. You’re left with regulators and planners with nothing left to regulate and plan. Or the lenses and mirrors out of a telescope. You’re left with a bunch of tubes with distortion-free 1x magnification.
Intension vs intention, mediocritization vs optimization, hard problem of consciousness vs mirrors and cameras
None of this is to say a useful and powerful new class of tools isn’t emerging. It definitely is. But you can’t just slap a sensor-actuator loop onto a tool, install a utility function into it, and define that to be sentience. That’s just regular automation.
I’m tired of this so-called “debate”. It’s a spectacular nothingburger attempt to narrativize a natural, smooth evolution in computing power as a phase shift into a qualitatively different tech regime. There *are* real phase shifts underway, this just ain’t one of them.
Phase transitions in tech evolution do not usually have human-meaningful interpretations. When we went from steam power to electric, big changes resulted. That was a phase transition. But we didn’t decide to call electricity “artificial glucose” or motors “artificial muscles”.
The output of gpt2 is at once deeply interesting and breathtakingly disappointing. I really was hoping to replace myself with a very small shell script but sadly that outcome ain’t on this vector of evolution, no matter how much compute you throw at it.
If you take out the breathless narratives, I honestly can’t tell how an apathetic AGI that squashes us on the way to paperclip maximization is any different from Australian wildfires or an asteroid headed at us. Yes, it could destroy us. No, there’s nothing “intelligent” about it.

3.
Anthropomorphic design has good justifications (Asimov saw this in 1950). Designing swap-in functional replacements for ourselves cheaply evolves legacy infrastructure. Driverless cars are valuable because driver-cars are a big sunk cost. Without them, we’d automate differently.
But the possibility of anthropomorphic design shouldn’t lead us to reductively misread a tech evolution anthropomorphically (let alone anthropocentrically). We call it pottery, not container-statues, because molding clay ain’t about us. It’s about the properties of clay.
Key diff between AlphaGo Zero and gpt2 worth mulling: AGZ discarded human training data and conquered Go by playing against itself. That can’t happen with gpt2, because there are no competition rules or goals for language outside closed-world subsets like crosswords.
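A runnable toy of that difference (my illustration: tabular self-play on the game of Nim, standing in for AlphaGo Zero’s vastly more sophisticated method): the loop below can learn from zero human data only because the game’s rules score every game it plays against itself. Open-ended language has no analogous built-in judge, so the loop has nothing to bootstrap from.

```python
# Toy self-play learner for Nim (take 1-3 stones; taking the last stone wins).
# Illustrative sketch only, not AlphaGo Zero's actual algorithm.
import random

MOVES = (1, 2, 3)
value = {0: 0.0}  # stones-left -> estimated win prob for player about to move

def choose(stones, explore=0.1):
    legal = [m for m in MOVES if m <= stones]
    if random.random() < explore:
        return random.choice(legal)
    # Leave the opponent in the worst-valued state we know of.
    return min(legal, key=lambda m: value.get(stones - m, 0.5))

def self_play(stones=15):
    history, player = [], 0
    while stones > 0:
        history.append((player, stones))
        stones -= choose(stones)
        player ^= 1
    winner = player ^ 1  # whoever just moved took the last stone
    # The game's outcome is the ONLY training signal: the rules are the judge.
    for p, s in history:
        target = 1.0 if p == winner else 0.0
        value[s] = value.get(s, 0.5) + 0.1 * (target - value.get(s, 0.5))

for _ in range(20000):
    self_play()

# With no human games at all, self-play tends to discover that multiples of 4
# are losing positions for the player to move (values near 0).
print({s: round(v, 2) for s, v in sorted(value.items())})
```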
Now if Boston Dynamics robots evolved an internal language in the process of learning how to survive in an open environment, that would at least be comparable to how I use language (in a survival closed loop in a social milieu).
But that example suggests the link between survival, intelligence, and computation is much subtler. If you wanted to build tech that simply solved for survival in an open environment, you’d be more likely to draw inspiration from bacteria than from dogs or apes.
The only reason to cast silicon-based computation into human-like form is to (a) replace ourselves cheaply in legacy infrastructure, or (b) scare ourselves for no good reason.
This is easy to see with arms and legs. Harder to see with mental limbs like human language. Asimov had this all figured out 50 years ago. The only reason AI is a “threat” is that the benefits of anthropomorphic computation to some will outweigh their costs to many, which is fine.
Non-anthropomorphic computation otoh is not usefully illuminated by unmotivated comparisons to human capability.
AlphaGo can beat us at Go
A steam engine can outrun us
Same story. More gee-whiz essentialism, that’s it.
4.

Same steady assault on human egocentricity that’s been going on since Copernicus.
Not being the center is not the same as being “replaced.”
Risks are not the same as malevolence
Apathetic harm from a complex system is not the same as intelligence at work
“Intelligence” is a geocentrism kinda idea. The belief that “our” thoughts “revolve around” us. Like skies seem to revolve around earth.
“AI” is merely the ego-assaulting discovery that intelligence is just an illusion caused by low-entropy computation flows passing through us.
What annoys me about “AI” understandings of statistical algorithms is that they obscure genuinely fascinating questions about computation.
For example it appears any Universal Turing Machine (UTM) can recover the state of any other UTM given enough sample output and memory.
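A finite toy of that observation (my own illustration; with genuine UTMs the space is infinite and the enumeration only semi-decidable): reconstruct an unknown machine purely from samples of its output, by brute force over a small machine space.

```python
# Recover an unknown toy "machine" from its output alone.
# Hypothetical sketch: the machine space here is tiny and finite, which is
# exactly what makes the search terminate; real UTMs are not so obliging.
from itertools import product

def run(machine, steps):
    """Toy machine: state' = (a*state + b) % m, emitting each new state."""
    a, b, m, state = machine
    out = []
    for _ in range(steps):
        state = (a * state + b) % m
        out.append(state)
    return out

def recover(observed):
    """Every toy machine whose output matches the observed samples."""
    return [mc for mc in product(range(8), range(8), range(2, 9), range(8))
            if run(mc, len(observed)) == observed]

hidden = (3, 1, 7, 2)     # the "other machine", unknown to the observer
sample = run(hidden, 12)  # given enough sample output...
print(recover(sample))    # ...the search pins down its behavior (plus aliases)
```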
This strikes me as more analogous to a heat engine locally reversing entropy than to “intelligence”. But nobody studies things like gpt2 in such terms. Can we draw a Carnot-cycle-type diagram for it? What efficiency is possible?
The tedious anthropocentric lens (technically, the aspie-hedgehog-rationalist projective lens) stifles other creative perspectives because of the appeal of angels-on-a-pinhead BS thought experiments like simulationism. Heat engines, swarms, black holes, fluid flows…
Most AI watchers recognize that the economy and complex bureaucratic orgs are also AIs in the same ontological sense as the silicon-based ones, but we don’t see the same moral panic there, when in fact both have even gone through paperclip-maximizer-type phases. Why?
I’ll tell you why. Because they don’t lend themselves as easily to anthropomorphic projection, and can’t be recognizably deployed into contests like beating humans at Go.
Markets beat humans at Go via prizes.
Bureaucracies do it via medals and training.