How to Be Optimistic About Technology Now

When I was much younger, I assumed people who were optimistic must have misplaced confidence. How anyone could see a future so bright was a complete mystery, I reasoned, when what we are exposed to is a series of mistakes followed by attempts at correction from public officials, corporate executives, and others. None of that is conducive to building hope, at least not until I spotted the optimistic part: the efforts to correct those problems and, ideally, to prevent the same things from happening again.

If you measure your level of optimism by how much course-correction has been working, then 2023 was a pretty hopeful year. In the span of about a decade, a handful of U.S. technology firms have solidified their place among the biggest and most powerful corporations in the world, so nobody should be surprised by a parallel increase in pushback against their breaches of public trust. New regulations and court decisions are part of a democratic process which is giving more structure to the ways in which high technology industries are able to affect our lives. Consider:

That is a lot of change in one year, and not all of it has been good. The Canadian government went all-in on the Online News Act, which became a compromised disaster; there are plenty of questions about the specific ways the DMA and DSA will be enforced; Montana legislators tried to ban TikTok.

It is also true, and should go without saying, that technology companies have done plenty of interesting and exciting things in the past year; they are not cartoon villains in permanent opposition to the hero regulators. But regulators are not evil, either. New policies and legal decisions which limit the technology industry, like those above, are not always written by doddering, out-of-touch bureaucrats and, just as importantly, businesses are rarely trying to be malevolent. For example, Apple arguably has good reasons for software validation of repairs; it may not be intended to prevent users from easily swapping parts, but that is the effect its decision has in the real world. What matters most to users is not why a decision was made but how it is experienced. Regulators should anticipate problems before they arise and correct course when new ones show up.

This back-and-forth is something I think will ultimately prove beneficial, though it will not happen in a straight line. It has encouraged a more proactive dialogue for limiting known negative consequences in nascent technologies, like avoiding gender and racial discrimination in generative models, and building new social environments with less concentrated power. Many in the tech industry love to be the disruptor; now, the biggest among them are being disrupted, and it is making things weird and exciting.

These changes do not necessarily need to come from regulatory bodies. Businesses are able to make things more equitable on their own, should they so choose. They can be more restrictive about what is permitted on their platforms. They can empower trust and safety teams to assess how their products and services are being used in the real world and adjust them to make things better.

Mike Masnick, Techdirt:

Let’s celebrate actual tech optimism in the belief that through innovation we can actually seek to minimize the downsides and risks, rather than ignore them. That we can create wonderful new things in a manner that doesn’t lead many in the world to fear their impact, but to celebrate the benefits they bring. The enemies of techno optimism are not things like “trust and safety,” but rather the naive view that if we ignore trust and safety, the world will magically work out just fine.

There are those who believe “the arc of the universe […] bends toward justice” is a law which will inevitably hold true regardless of our actions, but it is more realistic to view it as a call to action: people need to bend that arc in the right direction. There are many who believe corporations can generally regulate themselves on these kinds of issues, and I do too, to an extent. But I also believe the conditions by which corporations are able to operate are an ongoing negotiation with the public. In a democracy, we should feel like regulators are operating on our behalf, and much of the policy and legal progress made last year certainly feels that way. This year can be more of the same if we want it to be. We do not need to wait for Meta or TikTok to get better at privacy on their own terms, for example. We can just pass laws.

As I wrote at the outset, the way I choose to be optimistic is to look at all of the things which are being done to correct imbalances and repair injustices. Some of those corrections are being made by businesses big and small; many of them have advertising and marketing budgets celebrating their successes to the point where it is almost unavoidable. But I also look at the improvements made by those working on behalf of the public, like the list above. The main problem I have with most of them is that they have been developed on a case-by-case basis which, while it sets precedent, is a fragile process open to frequent change.

That is true, too, for self-initiated changes. Take Apple’s self-repair offerings, which it seems to have introduced in response to years of legislative pressure. It has made parts, tools, and guides available in the United States and, in a more limited capacity, across the E.U., but not elsewhere. Information and kits are available not from Apple’s own website, but from a janky-looking third party. It can stop making this stuff available at any time in areas where it is not legally obligated to provide these resources, which is another reason why it sucks for parts to require software activation. In 2023, Apple made its configuration tools more accessible, but only in regions where its self-service repair program is provided.

People ought to be able to have expectations — for repairs, privacy, security, product reliability, and more. The technology industry today is so far removed from its hackers-in-a-garage lore. Its biggest players are among the most powerful businesses in the world, and should be regulated in that context. That does not necessarily mean a whole bunch of new rules and bureaucratic micromanagement, but we ought to advocate for structures which balance the scales in favour of the public good.

If there is one technology story we will remember from 2023, it is undeniably the near-vertical growth trajectory of generative “artificial intelligence” products. They are everywhere, and they are being used by normal people globally. Yet this is, for all intents and purposes, a nascent sector, and that makes now a great time to set some standards for its responsible development and, more importantly, its use. Nobody is going to respond to this perfectly: not regulators, and not the companies building these tools. But they can work together to set expectations and standards for known and foreseeable problems. It seems like that is what is happening in the E.U. and the United States.

That is how I am optimistic about technology now.