Killer AI and Counting GPUs
Writing rules is hard to begin with. Writing rules for a technology that can be a black box, is advancing at unprecedented speeds, and is potentially smarter than humans is harder.
Regulation is hard, generally
Some examples of regulations and the impacts they have:
Cost of Compliance: I used to be an auditor at a big accounting firm. We loved regulation because it gave us all jobs, and we charged our clients a lot of money to talk about who received the bank statements when they got mailed to company headquarters. Regulation also significantly raised the cost of being a public company because you had to get audits and do other things like comply with the Sarbanes-Oxley Act.
Ability to Hide: Accounting regulation generally worked because if the regulation applied to you, it was because you had announced your presence in the public markets. You then displayed your compliance pretty publicly. There was no hiding, and if you weren’t plainly subject to the regulation, then you didn’t need to comply with it.
Rules vs. Principles: Another wrinkle in accounting regulation is that US GAAP was considered very “rules-based”, which meant companies might comply with the letter of the law but not its spirit. In contrast, “Principles-Based” regulation, which is more prevalent internationally, takes a broader approach to achieving desired regulatory outcomes: it states the spirit of the law, then lets the courts interpret compliance.
Cat & Mouse Games: F1 is a fun example or rules-based approaches that get gamed by smarter players. Of course we all like to see cars go vroom, but there is also a 183-page and extremely dense technical regulation document with incredible precision on what teams can and can’t do with their cars. The intended effect is a fun racing product, but the way they go about it is with extremely specific rules about the maximum radius of concave curves and tail pipe diameters. This usually works in the first year of a regulation set, but then the teams get clever and figure out ways to make their own cars go faster while ruining something called “raceability”, which is the entire point of the sport.
Regulation Moats: In oil & gas, lots of bigger companies absolutely love regulation. They see it as a moat that keeps smaller competitors out since they aren’t able to meet the costs, demands, and challenges of strict regulation on things like safety and environmental protection. As a result, energy costs stay higher because there is less competition. Regulation is a competitive moat.
Intent vs. Capacity: In nuclear regulation, the knowledge to build weapons is pretty easily accessible. It’s definitely on the internet (but I’m not searching for it). However, the physical infrastructure to build those weapons is hard, expensive, and really visible. So, this is the target of lots of controls to keep nuclear weapons out of the hands of despots and Bond villains. Regulation is focused on the capacity to do evil, not intent or know-how. If you’re Iran or North Korea, most everyone in the world worries you might use your nukes irresponsibly. We can’t prevent them from figuring out how to make the nukes, so instead, we just make sure they can’t have the great big machines required to do that really important step of going from blueprints to working warhead.
These are all examples of regulation and attempts to prevent really bad things from happening. They work in some ways, but also fail in others. They generally fail because the things they are regulating are either incredibly complex, or they are invisible, or they have unintended consequences. When something is complex (e.g. F1 and accounting rules) you will have really smart people figuring out loopholes. When something is invisible (e.g. nuclear proliferation), you will have bad actors ignoring regulation and doing evil things. When regulations have unintended consequences, such as prohibitive expense to comply (e.g. oil & gas or accounting), you will have a decrease in competition and undesired economic outcomes like expensive energy.
Regulating AI has a few dimensions
Regulating AI is complex, and there are multiple dimensions that require regulation.
Of course there is the existential threat of Killer AI. That’s a bad possibility, even if you think it’s the remotest of remote possibilities. Doing nothing about this is like playing Russian roulette with ChatGPT.
There are also more social problems like bias, fairness, and ethics. How do we make sure AI models don’t perpetuate or exacerbate existing social problems, or how do we ensure they don’t create new ones?
Then there are economic dimensions, like who gets to get rich from ChatGPT. Many content creators are upset about the use of their works to train foundation models and don’t like how others are getting rich off of them while the creators feel their jobs are threatened. This is just one element of divvying up the potential economic riches created by AI.
These are the areas that get the most focus when AI regulation comes up.
The regulatory solutions to these classes of problems are different too. The economic problems (e.g. copyright infringement and sharing the riches generated by AI) are very solvable. It is entirely possible that a court decision comes down tomorrow that completely settles the training-data-is-copyright-infringement debate. It’s just a matter of people deciding what is fair and then implementing it. That’s not exactly easy, but it is also the kind of stuff that governments are purpose-built for.
The social problems are also pretty addressable with traditional regulatory approaches. If an AI model is producing racist recommendations on who to hire and a company is blindly following them, that is a cut-and-dried case of discrimination under traditional regulatory frameworks: AI doesn’t have personhood, so the people using the AI are ultimately responsible for the racist hiring practices. It’s no different from someone using an evil spreadsheet that provides a recommendation saying “don’t hire any blue people”. That spreadsheet is obviously evil, but it’s the fault and liability of the person relying on it. The spreadsheet isn’t to blame. We have laws for that, such as the Equal Employment Opportunity Act, that already get us most of the way there. All that is probably needed is some tweaks to existing laws to ensure culpable parties can’t hide behind reliance on an AI model.
In both of the social and economic cases, we’ve already decided what kind of outcomes we want; now we just need to figure out how we achieve them with AI as an added complication. It’s a question of how we sustain already-established values, not what those values or goals should be.
The existential threat of killer AIs is different. We don’t even know what we want as an outcome besides “not total annihilation”. Are we ok with super-smart but benevolent AIs who act as our friends? Is this even possible? Or do we want to limit the intelligence of AI to party tricks and not even bother with it?
Side Note: I try to avoid talking about the technicalities of Killer AI and Skynet-like scenarios because I’m not a data scientist and nobody really knows what this future will look like. However, I do think it’s possible enough to warrant talking about the risk and mitigations. This has real business implications as it impacts innovation and operating costs, just like a construction company has to spend a lot of money when they deal with explosives.
After we figure out the goal on ‘protecting us from total annihilation’, we can’t rely on regulation that is anything like the Equal Employment Opportunity Act or Sarbanes-Oxley Act to protect us. Those laws were meant to target common and frequent behaviors with relatively minor (sub-civilization scale) consequences. The Killer AI annihilation risk is different. Handicappers vary widely on the probability of the “Evil AI Kills All Humans” scenario, but if it’s even a fraction of a percent, then it’s exactly the kind of threat that Nassim Taleb has explained really well and warns against. A 1% chance (1 in 100) of a significant-but-not-total loss might be acceptable, but a 0.1% chance (1 in 1000) of total annihilation feels like a bad bet. We should probably think about that and figure out how to reduce that probability, regardless of where it is today. Regulation would be one of those ways.
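To make that Taleb-style point concrete, here’s a back-of-the-envelope sketch in Python. The per-year risk numbers and the independence assumption are mine, purely for illustration; the point is just that even tiny annual probabilities of ruin compound if you keep taking the bet year after year.

```python
# Back-of-the-envelope illustration (made-up numbers, not a forecast):
# a small per-year chance of a ruin-level event compounds over time.

def cumulative_ruin_probability(annual_risk: float, years: int) -> float:
    """Chance of at least one ruin event over `years` independent years."""
    return 1 - (1 - annual_risk) ** years

for annual_risk in (0.001, 0.01):        # 0.1% and 1% per year, illustrative only
    for years in (10, 50, 100):
        p = cumulative_ruin_probability(annual_risk, years)
        print(f"{annual_risk:.1%}/year over {years} years -> {p:.1%} cumulative")
```

Even at 0.1% per year, the cumulative chance creeps toward double digits within a century, and that is the kind of bet Taleb says you never take when the downside is total ruin.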
Some people are talking about how to regulate AI
Elsewhere, Sam Altman testified on Capitol Hill yesterday and invited regulation of the AI space.
He proposed the creation of an agency that issues licenses for the creation of large-scale A.I. models, safety regulations and tests that A.I. models must pass before being released to the public.
The approach suggested here is mostly rules-based. It’s making sure the people are licensed and the company has all their paperwork in order. This all sounds well and good… if you are OpenAI. Compliance with this type of regulation will be really, really expensive. Companies who develop AI models will need teams of engineers just for testing models, lawyers for interpreting the results of tests, and compliance professionals to ensure all the paperwork that has to be filed with a regulator is in order. No big deal if you just got a $10B check from Microsoft. Spending $100M on compliance is just a cost of doing business. However, this is more problematic if you are a startup working on $1M in seed funding. This doesn’t even get into the complexity of how these goals would be accomplished. What is a “safety test” for a large-scale model? I assume it would have something to do with asking the model “are you evil?”, but really good AIs could probably figure out how to outsmart that one.
Meanwhile, the EU is pushing ahead with their regulation. This approach is closer to principles-based regulation as they are focused more on the ultimate risk potential of a model and prescribing requirements accordingly. Here, it doesn’t matter if the data scientists all had their government-issued Data Science licenses, it’s more about “but will the model kill us?”. This could potentially be harder to evade and find loopholes in, but it is also subject to a lot of interpretation and gray areas. Businesses typically don’t have patience for uncertainties and there’s a lot of trust being placed on the risk evaluations.
These philosophies have their own strengths and weaknesses, but they are ultimately reliant on government regulators writing rules that sufficiently protect society from all the risks posed by AI: economic, social, and especially killer robots. It’s nice to think that the regulatory process would consult with industry experts and field insights from the best minds to create well-crafted, impossible-to-evade laws. However, Eric Schmidt, the former CEO of Google, thinks governments aren’t smart enough to do this. His solution is that the private sector should regulate itself because it’s such a complex issue that legislators will never be up to speed enough to write future-proofed laws. And he’s probably right?
However, this is a terrible idea. Let’s say you get the 100 biggest tech companies in the world to shake hands and agree on a code of conduct when it comes to their AI development. It’s wonderfully written and impossible to be gamed. The rules are perfect. That’s cool, but it’s effectively an ‘opt-in’ regulatory regime and the 101st biggest tech company in the world will say “no thanks” and do things their own way. That doesn’t even get into the data scientists employed by Bond villains who will never even bother to open the PDF to read these rules; they’ll just go straight to building evil AI models with nobody to come check on them.
So, the crux of the problem is the people who have the power to compel compliance with rules and laws (governments) may not be smart enough to regulate AI. The people who are smart enough to regulate AI don’t have the power to compel compliance. The ultimate blend would be a technocracy, but we can all have lots of fun asking ChatGPT to write the synopsis of a dystopian novel that begins with Google and Amazon ruling the world.
OK, so let’s focus more on the “Total Annihilation” part
The social and economic problems are pretty solvable - it’s just a question of willpower and government coordination. The Killer AI problem is a much harder one to solve without completely shutting down AI research.
The way we manage the total annihilation risk from nuclear weapons is by counting physical infrastructure. If you’re a global superpower subject to treaties (e.g. the U.S. or Russia), you count your warheads and tell everyone. If you are a country that isn’t supposed to have nuclear weapons, the U.N. counts your centrifuges to make sure you can’t create enough raw materials for weapons. This seems to work pretty well.
Can GPUs be regulated in the same way? Processing power is already a natural bottleneck on AI advancement and deployment. GPUs are scarce resources, and even if we were able to manufacture them at infinite scale, current architectures and other bottlenecks would limit the rate at which models can be trained and generate predictions.
Additionally, if someone were to create a breakaway AI that is evil, it would be very, very hungry for computing power. I’ve talked about this before:
My theory of human salvation is that once a breakaway AI starts teaching itself human psychology, bioweapon chemistry, military cybersecurity hacking, and how to rig elections; it will necessarily have to start consuming lots and lots of computing power on the cloud very quickly. Somewhere, an Amazon Accounts Receivable clerk will notice this spike in usage on an account and say “Hmm, I don’t like the looks of this. This PhD student is probably not going to be able to afford the $25,000,000 bill they are on-track to accrue this month. I’m going to go ahead and throttle their account.” And that should be the end of that because breakaway evil models are supposed to learn exponentially. This can’t be done with a throttled account and pile of unpaid invoices from Amazon.
However, just because available computing power is a natural limiter today doesn’t mean it will be in the future. The nature of computing hardware innovation is that it happens fast, and what is considered a limit today becomes commonplace very soon. Thus, we’d have to assume that someday soon the computing power envelope, and what is possible in AI, will encompass “can create Killer AI”.
But I have a theory. Maybe we use regulation to perpetuate GPUs as the constraint and the thing that protects us from Killer AI?
A possible framework could look something like this:
Identify the companies that can manufacture GPUs at scale (e.g. Nvidia). This is impossible to hide and would be pretty hard for a Bond villain to replicate, even if they were able to do it at their secret island base and away from the visibility of regulators.
Require those companies to give unique serial #s to the GPUs they produce and sell, then track who those GPUs get sold to. This is very common in most manufacturing industries. There will be distributors and middlemen, so ensure they track the serial #s of ultimate end-users as well. This would be like “Know Your Customer” regulations in banking. This is also pretty easy to enforce globally from a few powerful nations (e.g. the United States or the EU) since they can easily say “if you don’t do this, you can’t sell your GPUs to our enormous economies.”
Identify major concentrations of GPUs in use (e.g. data centers, research centers). Above some threshold, designate these locations as special “AI Compute Centers”. This would be very similar to banks getting regulatory approval to do banking.
Require the AI Compute Centers to monitor their GPU usage by end-user/project/model/whatever, including sudden bursts in utilization and sustained high levels of utilization (a rough sketch of what this might look like follows this list). This would be analogous to the anti-money laundering monitoring required of all banks today.
If anomalous behavior is spotted (potentially indicating a breakaway, uncontrolled AI), activate built-in failsafes (e.g. someone with an axe at the big power cable going into the building) and report to a regulator for further investigation of the potential for evil.
Require annual audits of the GPU monitoring and reporting mechanisms to ensure there are no shenanigans.
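As referenced above, here’s a rough sketch of what the monitoring piece might look like in practice. Everything in it is hypothetical: the serial-number registry, the utilization thresholds, and the “report to regulator” step are stand-ins I made up for illustration, not a real system or any regulator’s actual requirements.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "AI Compute Center" monitoring step above.
# Thresholds, names, and the reporting hook are all made up for illustration.

SUSTAINED_UTILIZATION_LIMIT = 0.90   # flag sustained high usage (illustrative threshold)
BURST_MULTIPLIER = 5.0               # flag sudden spikes vs. recent baseline (illustrative)

@dataclass
class GpuRecord:
    serial_number: str               # assigned at manufacture, tracked through resale
    end_user: str                    # "Know Your Customer"-style registration

@dataclass
class ComputeCenter:
    name: str
    registry: dict[str, GpuRecord] = field(default_factory=dict)

    def register_gpu(self, serial_number: str, end_user: str) -> None:
        """Record who ultimately operates each serialized GPU."""
        self.registry[serial_number] = GpuRecord(serial_number, end_user)

    def check_usage(self, end_user: str, recent_avg: float, current: float) -> str | None:
        """Return a flag if usage looks like a runaway training run, else None."""
        if current >= SUSTAINED_UTILIZATION_LIMIT:
            return f"{end_user}: sustained utilization of {current:.0%} at {self.name}"
        if recent_avg > 0 and current / recent_avg >= BURST_MULTIPLIER:
            return f"{end_user}: usage burst {current / recent_avg:.1f}x baseline at {self.name}"
        return None

# Example: a sudden spike gets flagged for the regulator (and the person with the axe).
center = ComputeCenter("example-datacenter-1")
center.register_gpu("GPU-0001", "phd-student-llc")
flag = center.check_usage("phd-student-llc", recent_avg=0.10, current=0.95)
if flag:
    print("Report to regulator:", flag)
```

The real version would obviously be more complicated, but the shape is the same as AML monitoring: register who operates the hardware, watch for anomalies, and escalate.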
Some important benefits here would be:
The cost is carried by companies already rich enough to build data centers. Sure, they will eventually pass it on to customers, but these costs would be much lower overall and spread out more equitably based on compute usage - so probably pennies on the dollar. This also means compliance cost doesn’t become a moat for the companies rich enough to hire teams of compliance technicians and lawyers.
There is no ability to hide if you are a breakaway AI or massive AI developer. At some point, you are going to have to consume massive amounts of computing power and this system would spot that and flag it for further investigation.
There are no cat & mouse games for people trying to game the system. If you are building something massive and potentially unsafe, you can’t hide behind technicalities of compliance.
Would this save us from total annihilation by evil AI models? I think it would at least lower the chances of it. It’s also better than making Data Scientists pinky-swear that they won’t use their power for evil.
Other Stuff
Amazon is building AI search chatbots
Did you realize that Amazon product search is the 3rd most-used search engine in the world? A lot of hay has been made about Microsoft incorporating ChatGPT into Bing, but that’s only the 5th most-used search engine. Now, Amazon is bringing chat capabilities to their search, and the results could be… weird. I search for things like “Diapers” and “WD-40” on Amazon a lot. It’s hard for me to see why I need a chatbot to help me locate the ideal hammer “in the style of William Shakespeare”.
Goldman Sachs is using AI as a professional Tinder
GS is using something called “Louisa”, which tells their bankers which other bankers at GS they should meet. I’ve got to assume this is a novel approach to getting workers back into the office, as GS has taken a pretty hard line on returning to the office. However, I’m not sure “here is your AI-recommended office friend” will be the best lure for Gen Z bankers.
Amazon is writing the book on Operational AI
Amazon has been talking about “sending you stuff before you even buy it” for a long time. Now, they might actually have the AI to do it, or at least get close to it.
Google isn’t releasing Bard in Europe
You can’t use Google’s Bard in Europe. Nobody outside of Google knows why, but my guess is they are avoiding the regulatory peril and uncertainty of the EU.
The Revolution Will Not Be Measurable
AI will be a tremendous boon to knowledge workers, but much of what they do can’t be measured with normal economic models. This means many of the benefits of AI productivity enhancements will go unmeasured.
And…
Google battling AI misinformation (that it may help create)
Softbank is behind on Generative AI
Endnote: I don’t have an editor and I do have a dayjob, so please excuse the minor typos and grammatical mistakes.