Silicon Valley Bank, ChatGPT, AI … and the beauty of working in a country that doesn’t regulate technology

The U.S. has been the most pro-tech of any “Big-Country” government.

And that nevah, evah gonna change.

18 MARCH 2023 (Crete, Greece) — When the Silicon Valley Bank meltdown began late last week, the following weekend was filled with Silicon Valley industry mates of mine screaming about the imminent “end of the world”, flashing their thoughts across social media. It was like watching those inflatable marionettes outside car dealerships – swaying, falling, and rising with the shifting winds.

Which just seemed like a very strange and dangerous thing to do during a potential banking meltdown. Many people (rightfully so) interpreted it as a sign of pure unthinking panic, and some blamed the screaming VCs for causing the very bank runs they were warning about. And perhaps this was true.

But I think there was method to the madness — the VCs were trying to scare the government into action by raising the specter of a wider disaster, both to save the deposits of their portfolio companies and out of concern for the state of the overall economy. When government action materialized, they gave themselves at least partial credit. The bank was secured and the world was saved.

But perhaps they shouldn’t be so celebratory. I think the government did exactly the right thing in backstopping uninsured deposits without taxpayer funds, while letting managers, shareholders, and bondholders fail. But the startup world’s call for government intervention — which many will see, rightly or wrongly, as a bailout of VCs and startups — could come with a price. For many decades, the tech startup sector has been allowed to operate with a high degree of autonomy from the rest of the economy, with a helpful regulatory environment and government policies to encourage high-risk, high-reward activity. Now there is fear that if Washington, D.C. starts to see the world of startups as a source of systemic risk, that situation could change, and that the long and mutually beneficial relationship is in danger of breaking down.

Nah. The symbiotic relationship will continue. The U.S. has been the most pro-tech of “Big-Country” governments. That will continue.

Sequestered on the West Coast, far away from the bureaucrats of D.C. and the moneymen of New York, IT startups have operated as a sort of equity tranche for the U.S. economy — a high-risk, high-reward activity that was allowed to succeed or fail on its own merits. The startup scene gave us corporate stars like Amazon, Apple, Google, Intel, and Tesla — companies that Europe and developed East Asia seemed incapable of producing. And when tech blew up in 2000, taking billions in dotcom fortunes with it, it had only mild effects on the economy as a whole.

For the U.S. economy, tech startups have always represented a lot of upside and very little downside.

To its great credit, the U.S. government recognized this, and did everything it could to help the startup sector. The Bayh-Dole Act of 1980 encouraged universities to spin off their research into private companies. The Telecommunications Act of 1996 represented an unprecedented effort to remove regulatory barriers and foster competition in the burgeoning internet sector. It allowed startups to access networks created by incumbents, which helped prevent the internet from fragmenting into a set of walled gardens. The famous Section 230 shielded internet platforms from legal liability for content their users publish there. Many other laws tried to encourage the sector’s growth.

More important than what the government did do, however, is what it didn’t do. The U.S.’s failure to impose onerous regulations on internet companies, as Europe did with its General Data Protection Regulation, is the proverbial dog that didn’t bark. Nor, despite a wave of anti-tech sentiment in the 2010s, did the U.S. government smash its IT sector the way China did in its recent regulatory crackdown. When the government has come down on tech, as in the Microsoft antitrust case in 2001 or the more recent antitrust case against Google, it has been with the intent of preventing the industry from being dominated by a single big player.

As a team of Brookings scholars recently put it:

Global tech policy fragmentation is often attributed to differing values and political ideologies within key jurisdictions: the United States, the European Union, and China. In this narrative, the U.S. prefers digital laissez-faire; Europe opts for digital big-state socialism; and China pursues a politically motivated strategy of restricting some technologies and scaling up others to maintain social control.

NOTE TO MY READERS: the above is from “The anatomy of technology regulation”, which is a very insightful read. Worth your time.

Meanwhile, the U.S. government’s attitude toward new technologies has been remarkably supportive. Regulation has been incredibly accommodating toward self-driving cars, and the government is spending a ton of money building electric vehicle charging stations all across the land. Nor has the government made any real attempt to regulate the fast-growing field of AI.

And on top of all that, the government has showered the startup sector with incentives for both financial and real investment. The QSBS tax break, for example, exempts startup investors from having to pay some of the tax on their capital gains. R&D tax credits apply to a wide swath of business activities in tech. There are a large number of state incentives as well.

This is a broad-brush treatment, to be sure. I do not have space to catalog all of the incentives. And there are, of course, plenty of instances of U.S. government meddling and regulatory errors. But libertarians who dream of a government that leaves companies completely alone are indulging in pure fantasy — government is, by its very nature, a meddlesome beast. It’s important to compare the U.S. government to other regulatory regimes that actually exist around the world. Only a few countries, all of them quite small (Estonia, Israel, Singapore), are known for having more startup-friendly laws and regulations than the U.S. Among big-country governments, the U.S. stands out as the most favorable for companies trying to break into the IT industry.

Which is probably why the world’s VC and startup ecosystems are centered in the U.S.

So despite occasional salvos of anti-tech rhetoric from some politicians, and despite bellows of libertarian bravado from cryptocurrency founders who bragged that they were going to replace governments with blockchains, the symbiotic relationship between the U.S. government and the startup sector has endured, to the great benefit of both. The U.S. gets lots of things wrong, but supporting tech startups is not one of them.

But if the U.S. government begins to perceive the startup sector as a source of systemic risk, that long-standing symbiosis could be in danger.

But why would the startup sector be a systemic risk? Startups, by their very nature, don’t seem like one. They’re small companies, and when they go bust — as they very often do — it doesn’t shake the foundations of the economy. But the events of the past week show a few ways in which, taken together, they could pose a risk.

The Silicon Valley Bank collapse was due partly to rising interest rates, which decreased the value of the bond portfolios that many banks hold and made them more vulnerable to runs. The turmoil at Credit Suisse, over in Switzerland, demonstrates how the banking weakness extends beyond the tech sector and beyond America’s borders. But it’s impossible to ignore the fact that in the U.S., all of the banks that either failed or were in danger of failing catered heavily to the technology sector. Silvergate and Signature Bank, two of the other banks that failed over the weekend, catered to depositors from the crypto world, while SVB, and to a lesser extent First Republic and PacWest, took many deposits from startups on the West Coast.

Startup and crypto depositors tend to keep very large amounts of cash in their bank deposits. Startups do this because they tend not to have enough revenue to pay their expenses — instead, they get big chunks of cash from VCs and keep it around until they need to use it. Crypto companies do this as part of various financial schemes, such as stablecoins. Those big cash deposits are “jumbo” deposits, meaning they’re way over the traditional $250,000 FDIC insurance limit. Having a lot of jumbo deposits makes banks more vulnerable to runs — or at least it did, before this week’s bank rescue. Meanwhile, taking a lot of deposits from the startup sector means those deposits and withdrawals tend to be correlated; when VC funding dries up, money can move out fast. That’s why tech-friendly banks were the quickest to collapse — in a way, they were sort of the subprime sector of medium-sized banks, not because of risky loans but because of risky deposits.

In the wake of the rescue, tech-friendly banks may now come under much tighter regulation. Even though they don’t have assets above the official $250B cutoff to be recognized as “systemically important” financial institutions under Dodd-Frank, everyone now recognizes that they are systemically important. And systemic importance comes with regulation to limit risk. Justin Wolfers, an economist who has a very good grasp of the banking sector, suggested in a podcast this week that the prospect of new regulation is one reason some medium-sized bank stocks have dropped, in spite of the deposit guarantee.

Although SVB and other banks failed because of risky deposits, future regulation could also stop them from making risky investments. SVB is famous for pioneering the practice of loaning money to startups, and it also acted as a venture capitalist in its own right; those activities may both now be sharply curbed.

But bank failure is not the only possible source of systemic risk from startups. These days, tech companies are staying private longer and longer, so that so-called “startups” can actually be big companies that many other companies rely on for their day-to-day activities. One example is payroll and payments providers like Rippling, several of which banked with SVB. If these companies had been unable to get cash out, lots of other companies might have been unable to make payroll, causing a seizure in parts of the non-tech economy.

The government will certainly be looking for more cases like Rippling.

An even bigger source of systemic risk is crypto itself. Crypto started with a bellow of anti-government defiance — Bitcoin was supposed to replace central banks and fiat currencies, a transition that would have undoubtedly led to a massive economic collapse. Fortunately, that’s not going to happen (though those anti-government bellows certainly didn’t do much for the startup world’s image in D.C.).

Instead, the danger is more quotidian. Crypto is basically just unregulated finance, and unregulated finance has a tendency to collapse spectacularly, even when it isn’t outright fraud, as in the case of Sam Bankman-Fried and FTX. And as linkages grow between the unregulated crypto world and the world of traditional finance, the danger grows that collapses in crypto could spill over into the U.S. banking system in ways that they never have before. In the wake of Silvergate and Signature, the U.S. government will undoubtedly be looking harder for these growing linkages, and for ways to regulate crypto to reduce the danger of spillovers.

U.S. regulators and legislators may also now be more fearful of the burgeoning generative AI industry, GPT-4 being just one example. No one really knows how large language models will affect finance. Will people use them to trade assets? To foment fear, pump stocks, or manipulate markets? It’s not clear yet, but after the events of the past week, it’s possible that the government will try to get out in front of future problems by regulating AI today.

All these possibilities have one thing in common: they’re all about finance and debt. In a deep sense, financial risk is systemic risk — it’s when banking systems collapse that economies go into long, deep recessions. And debt defaults, of one sort or another, are what cause banking systems to collapse. Equity bubbles, in contrast, are much less damaging — witness the limited fallout from the dotcom crash, or the fact that big tech stocks crashed and VC funding dried up over the past year without causing any noticeable harm to the overall economy. It’s only really when tech startups start affecting the world of loans and bonds and bank deposits that they become the kind of thing that could really bring the economy down. As the startup sector has grown larger, and has expanded more into financial businesses, its connections to that world of debt have grown.

Anyway, the things I listed above are just a few ways that the government might conclude that the VC/startup sector, which looked like all upside just a couple of years ago, is now a source of economic risk. The larger point here is that the collapse of SVB, and the way VCs clamored for a government rescue, could focus the regulatory state’s roving Eye of Sauron on a sector that it had previously ignored.

In this situation, if I were in the VC/startup world, I would cool down any anti-government rhetoric. In happier times, libertarian outbursts were just part of the ecosystem of ideas that kept Silicon Valley and the U.S. government at arm’s length, allowing their symbiotic relationship to persist. Today, after VCs loudly and successfully demanded government help for their portfolio companies, angry attempts to blame the government for the whole debacle will look like biting the hand that feeds, and will risk refocusing the techlash from big tech companies to VCs.

Beyond that, if I were in the VC and startup world, I would think twice about expanding fintech startups’ role in the world of lending and debt, and about their connections to the traditional banking sector. When tech can boom and crash without taking down the banks, it’s free to innovate and build and take risks — and the government will have a strong incentive to leave it unmolested. That formula has worked well until now; it would be a shame to tear up a favorable arrangement.

But Silicon Valley’s most effective weapon is simple: it is the heart of the American innovation economy. Whether politicians and the media like it or not, the technology sector is one of the few engines of growth in the U.S. America’s long-term war with its new geopolitical rivals will continue to be fought on the technology front, and a disruption in the innovation ecosystem means a setback whose impact will be felt over the long term. I can understand that the general population, populist politicians, and the media have faint regard for uber-rich tech giants, blowhard billionaires, and the lack of empathy and morality in emergent technologies — but that’s missing the forest for the trees.

And what to do about ChatGPT and all of the generative AI being produced? This week the knee-jerk politicians in the U.S. Congress were mindlessly rabbiting on about how these new language models may not seem all that dangerous, but there are worse risks – the ones we cannot anticipate. And frankly, the more time I spend with AI systems like GPT-4, the less I’m convinced that we know half of what’s coming down the pike. But even the U.S. Congressional teams that are in charge of developing legislation to regulate AI said ChatGPT is causing them to “back off” because they do not understand it. As one put it, “it seems to be both the benign and the malignant”.

As I noted in a long piece last weekend about the coming “textpocalypse”, our relationship to writing (and images) is about to change forever – and it might not end well. What if, in the end, we are done in not by intercontinental ballistic missiles or climate change, not by microscopic pathogens or a mountain-size meteor, but by … text? Simple, plain, unadorned text, but in quantities so immense as to be all but unimaginable—a tsunami of text swept into a self-perpetuating cataract of content that makes it functionally impossible to reliably communicate in any digital setting?

That, I think, is the biggest danger – but well beyond the scope of regulation. As Stephen Marche (the essayist and cultural commentator) says:

“We are all going to be challenged by ‘the Big Blur’. Thanks to AI, every written word now comes with a question. The question will be simple but perpetual: Person or machine? Every encounter with language, other than in the flesh, will now bring with it that small, consuming test”.

And he is right. For some – teachers, professors, journalists – the question is existential, urgent, essential. Who made these words? For what purpose? And for those who operate in the large bureaucratic apparatus of boilerplate – advertisers, copywriters, lawyers, political strategists – the question will be irrelevant except as a matter of efficiency. How will they use new artificial-intelligence technology to accelerate the production of language that was already mostly automatic? And for everyone, the question will now hover – quotidian and cosmic – over words wherever you find them: Who’s there? Really?

At its core, technology is a dream of expansion — a dream of reaching beyond the limits of the here and now, and of transcending the constraints of the physical environment: frontiers crossed, worlds conquered, networks spread. But in our post-Turing-test world this is not a leap into the great external unknown. It’s a sinking down into a great interior unknown. The sensation is not enlightenment, sudden clarification, but rather eeriness – a shiver on the skin if you like.

And as AI systems become more integrated into our lives, they will alter the foundations of society – sans regulation. They will change the way we work, the way we communicate, and the way we relate to one another. They will challenge our assumptions about what it means to be human, and will force us to confront difficult questions about the nature of consciousness, the limits of knowledge, and the role of technology in our lives.

To conclude, I must add that in both of these stories … Silicon Valley Bank and generative AI … you do not need to dig too deep before you hit shit. In SVB’s case, we should all be furious with its executives for their incompetent risk management and poor communications strategy. If you work in start-ups or the technology sector, you should be angry at the venture capitalists who spurred a bank run at SVB only to turn around and impudently beg for Uncle Sam’s help. Perhaps most of all, you should be furious with the American government – not for its creation and execution of a bailout to protect the broader financial sector, but because its help was needed in the first place. You should be mad at Congress for loosening regulations on medium-size banks like SVB five years ago. And you should be mad at the federal and state regulators who, as supervisors of the system, allowed this mess to happen.

On the generative AI side, I tend to agree with the Congressman I quoted above: this AI can be both the benign and the malignant.

I gave GPT-4 all the info we have about a new film production: my sloppy meeting notes, bullet points extracted from emails between the video team, a stack of historical notes from my research team, and parts of an AI-generated memo based on hundreds of URLs and YouTube links.

It reorganized and consolidated all of it into an easy-to-read narrative and storyboard 😳.
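
For the curious, the mechanics are roughly the sketch below, written in Python against OpenAI’s chat-completions API. It is a minimal illustration only – the file names, model choice, and prompt wording are stand-ins I made up for this post, not my actual production setup.

```python
# Minimal sketch of the "consolidate messy notes" workflow described above.
# Assumptions: the openai Python SDK (v1.x) is installed and OPENAI_API_KEY is
# set; the file names, model, and prompt are illustrative stand-ins.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Gather the raw source material (hypothetical file names).
sources = {
    "meeting notes": Path("meeting_notes.txt").read_text(),
    "email bullet points": Path("email_bullets.txt").read_text(),
    "historical research notes": Path("research_notes.txt").read_text(),
    "AI-generated memo": Path("ai_memo.txt").read_text(),
}

prompt = (
    "You are helping plan a film production. Reorganize and consolidate the "
    "following raw material into (1) a short, readable narrative summary and "
    "(2) a scene-by-scene storyboard outline.\n\n"
    + "\n\n".join(f"## {name}\n{text}" for name, text in sources.items())
)

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Most of the work, in practice, is iterating on that prompt until the storyboard structure comes out the way you want.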

And another example. Spreadsheets are really storytelling tools: they run formulas to show potential scenarios. We all know that. The litigation support team at my eDiscovery division fed a stack of spreadsheets related to a legal case into the model and ran a well-engineered prompt (we used a professional prompt engineer, a new specialty) aimed at data visualization. It in effect bypassed the whole columns/rows/cells paradigm, using engineered word-and-number prompts to get a motion visualization nobody would have ever thought of 😳 😳.
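
Again, a rough sketch of the idea rather than the team’s actual pipeline – flatten the sheets to plain text, then ask the model for the story and a visualization spec instead of working cell by cell. The file name, model, and prompt below are illustrative stand-ins.

```python
# Rough sketch of the spreadsheet-to-visualization idea described above.
# Assumptions: pandas (with openpyxl) and the openai SDK are installed; the
# workbook name, model, and prompt are made-up stand-ins, not real case data.
import pandas as pd
from openai import OpenAI

client = OpenAI()

# Load every sheet of the (hypothetical) case workbook and flatten it to CSV
# text, discarding the rows/columns/cells paradigm discussed above.
workbook = pd.ExcelFile("case_data.xlsx")
flattened = "\n\n".join(
    f"### {name}\n{workbook.parse(name).to_csv(index=False)}"
    for name in workbook.sheet_names
)

prompt = (
    "Below are spreadsheets from a legal matter, flattened to CSV. Ignore the "
    "grid layout; describe the story the numbers tell, then propose a single "
    "animated, time-based visualization (as a Vega-Lite JSON spec) that makes "
    "that story obvious.\n\n" + flattened
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```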

We have all seen similar examples over the last few months. ChatGPT and its progeny have further shifted our understanding of how AI and human creativity might interact. Structured as a chatbot – a program that mimics human conversation – ChatGPT is capable of a lot more than conversation. When properly entreated, it is capable of writing working computer code, solving mathematical problems and mimicking common writing tasks, from book reviews to academic papers, wedding speeches and legal contracts.

Yet, the bigger issues remain.

Taken at their core, AI image and text generation are pure primitive accumulation: expropriation of labor from the many for the enrichment and advancement of a few Silicon Valley technology companies and their billionaire owners. These companies make their money by inserting themselves into every aspect of everyday life, including the most personal and creative areas of our lives: our secret passions, our private conversations, our likenesses and our dreams. They enclose our imaginations in much the same manner as landlords and robber barons enclosed once-common lands. They promise that in doing so they will open up new realms of human experience, give us access to all human knowledge, and create new kinds of human connection. Instead, they are merely selling us back our dreams repackaged as the products of machines, with the only promise being that they’ll make even more money advertising on the back of them, or some such other money generator.

As I noted above, as AI systems become more integrated into our lives, they will alter the foundations of society. They will change the way we work, the way we communicate, and the way we relate to one another. They will challenge our assumptions about what it means to be human, and will force us to confront difficult questions about the nature of consciousness, the limits of knowledge, and the role of technology in our lives.

Oh, such fun in store.
