In testimony before the House Committee on Energy and Commerce, CEO Shou Zi Chew struggled to reassure lawmakers that the massively popular social video app does not pose a risk to its 150 million users or share user data with the Chinese Communist Party (CCP). But he admitted that TikTok had collected location data on U.S. users in the past, and said some historical data is still stored in servers that could be accessed by engineers from ByteDance, its parent company based in China.
Members of both parties spent hours denouncing TikTok’s data collection practices and painting it as a tool used by the Chinese government to track and spy on Americans. Before lawmakers even began their questioning, GOP Rep. Cathy McMorris Rodgers of Washington, the committee’s chair, said they “do not trust that TikTok will ever embrace American values.”
I had today’s hearing playing in the background and it was a tense and frustrating back-and-forth. As is typical for these kinds of hearings, lawmakers mostly soapboxed relentlessly, asked complicated questions they framed as a simple matter of yes or no, and did not let Chew finish answering. The word “communist” was used with the same frequency and tone as the word “fuck” in “Uncut Gems”. It was very clear, from the outset, that most committee members were not much interested in investigating, but were instead trying to justify a forthcoming likely vote to ban TikTok from the United States. Perhaps appropriately, a representative named McCarthy supports a TikTok ban.
Chew, meanwhile, played the same role as any tech company CEO who has sat in that chair before him, and either reiterated talking points or said he would get back to lawmakers with more complete answers. One notably awkward moment came when Rep. Debbie Lesko, of Arizona, asked Chew whether he agreed that the Chinese government had persecuted Uyghurs, and he would not answer, though he did associate it with his “concern[s] about all accounts of human rights abuse”. If you scrunch that up a bit, it kind of looks like a “yes”, but only kind of.
The concerns raised were, as many members acknowledged, bipartisan and (nearly) universal, but they differed in focus. Republican representatives were overwhelmingly concerned about the company’s Chinese government connections, but also questioned its ability to effectively moderate users’ posts. They repeatedly cited how effective TikTok’s moderation responses were in places like Singapore, with its stricter drug laws, and how much faster moderators work on Douyin, its Chinese-market equivalent. These were glowing endorsements of more interference by private companies in users’ posts coming from Republican lawmakers. Meanwhile, Democrats often used their time to interrogate Chew about the effects of TikTok on mental and physical health, particularly in children. During a break in the hearing, Geoffrey Fowler of the Washington Post lamented that the topics were getting all mixed up and there was little substantive questioning.
There were rare moments of clarity and productive questioning; Rep. Lori Trahan provided one of them:
Rep. Trahan: In 2021, the U.K.’s Age Appropriate Design Code went into effect, mandating 15 standards that companies like you need to follow to protect children on your platform. You still operate in the United Kingdom, which means you should be in compliance with this code. So my question is simple: will you commit to extending the protections currently afforded children in the U.K. to the millions of kids and teens who use your app here, in the United States?
Chew: We take the safety of the younger users on our platform very seriously —
Rep. Trahan: This is a good way to prove it.
For me, this exchange underscored how important it is for lawmakers to set meaningful standards. All social media companies are going to maximize their user base within legal limits. TikTok, like other platforms, sets restrictions for users who are old enough to create accounts but still minors. But, like every other platform, it will not willingly reduce its user base — why would it?
Ahead of the hearing, Rep. Trahan authored a thoughtful op-ed in the Boston Globe noting, among other things, the reason why TikTok is under a unique spotlight:
Finally, the American people have to understand how TikTok went from a relatively obscure part of the national security conversation to a full-blown proxy as tensions escalate with China. The short answer: Big Tech corporations in America.
Rep. Trahan is not wrong. The national security concerns raised during the hearing were mostly hypothetical, often speculating about algorithmic manipulation and covert influence campaigns. The most concrete fears were borne of a Chinese national security law which compels companies based there to surreptitiously hand over user data when demanded by the government. One representative called TikTok the equivalent of a Chinese spy in Americans’ pockets.
As a Canadian watching this hearing, I could not help but raise an eyebrow. The U.S. has similar policies but dominates the tech industry. Is just about every other hardware and software product an American spy in the pockets of users worldwide? I take it this is not a moral objection but a political one. It is grossly oversimplifying the situation to claim that U.S. lawmakers are using their power to assist domestic businesses confronting a large and foreign competitor, but it is notable how much time is being spent confronting TikTok specifically instead of building a privacy framework that would limit its risk. After all, a good and worthwhile national privacy law would also kneecap many Silicon Valley giants. Again, this is only one component of a very complex picture, but it is worth mentioning.
It does appear U.S. lawmakers are heading full-speed toward voting for a TikTok ban or forced divestment. The Electronic Frontier Foundation says there would be numerous legal challenges should banishment be on the table.
As mentioned repeatedly today, a “Select Committee on Foreign Interference Through Social Media” in the Australian parliament produced a report — PDF download, not viewable inline — which concluded that TikTok “can no longer be accurately described as a private enterprise”. It is on my reading list. If it is as claimed, it will be more comprehensive than anything from this hearing.
FingerprintJS has a demo built into its homepage, https://fingerprint.com. When you visit this website, they generate a visitor ID (fingerprint) which is unique for your browser. So even if you clear the cache (and other site data) or visit the site in Private Browsing mode, they can generate the same ID and correlate with your previous visit.
My visitor ID was stable in Safari after visiting fingerprint.com only in private windows across two separate sessions. This, despite using Safari’s anti-tracking features, having iCloud Private Relay switched on, and using browser extensions which limit what kinds of scripts are able to run in my browser — and, again, accessing it only in private windows. On its homepage, FingerprintJS says the “VisitorID will remain the same for years, even as browsers are upgraded”. It can be, near as makes no difference, a permanent personal identifier.
The writer notes Firefox has a resistFingerprinting setting which does appear to prevent whatever techniques FingerprintJS is using by restricting access to some APIs. However, as this technology is also used to check that website visitors are real people and to reduce credit card fraud, I imagine it could prove restrictive. There are already many websites which challenge me to prove I am not a bot simply because I am using Safari; the same sites do not present so many challenges in Chrome.
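The mechanics are simple enough to sketch. This toy Python example is my own illustration, not FingerprintJS’s actual method, and the signal names are invented: hash a handful of attributes the browser exposes anyway, and you get the same identifier on every visit, cookies or no cookies.

```python
import hashlib
import json

def fingerprint(signals: dict) -> str:
    """Hash a dictionary of browser-exposed signals into a stable visitor ID."""
    # Canonicalize so the same signals always serialize identically.
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:20]

# Private Browsing clears cookies and site data, but these signals
# (hypothetical values) are unchanged, so the ID is unchanged too.
visit_one = fingerprint({
    "user_agent": "Mozilla/5.0 (Macintosh) Safari/605.1.15",
    "screen": "2560x1440x30",
    "timezone": "America/Edmonton",
    "fonts": ["Helvetica Neue", "Avenir", "Charter"],
})
visit_two = fingerprint({
    "user_agent": "Mozilla/5.0 (Macintosh) Safari/605.1.15",
    "screen": "2560x1440x30",
    "timezone": "America/Edmonton",
    "fonts": ["Helvetica Neue", "Avenir", "Charter"],
})
assert visit_one == visit_two
```

This is also why Firefox’s approach works: restrict or coarsen the APIs that feed those signals, and the hash stops being unique.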
Shou Zi Chew is not a prolific TikToker. The 40-year-old CEO of the Chinese-owned app has just 23 posts and 17,000 followers to his name – paltry by his own platform’s standards.
On Thursday Chew will appear before a US congressional committee, answering to lawmakers’ concerns over the Chinese government’s access to US user data, as well as TikTok’s impact on the mental health of its younger user base. The stakes are high, coming amid a crackdown on TikTok from the US to Europe. In the past few months alone, the US has banned TikTok on federal government devices, following similar moves by multiple states’ governments, and the Biden administration has threatened a national ban unless its Chinese-owned parent company, ByteDance, sells its shares.
Chew’s prepared remarks (PDF) are mostly the kind of boilerplate stuff you would expect from a tech company CEO brought before lawmakers in the 2020s, albeit with the unique twist that he must defend TikTok against accusations it is a vessel for foreign espionage. And, as is common for these kinds of hearings, I expect little substantive questioning. However, I am anticipating someone will seize upon the last sentence in this paragraph:
Next, I want to address what we’re working on now. I know that there has been a lot of speculation about Project Texas recently based on media coverage. While conversations with the government are ongoing, our work on Project Texas has continued unabated. We are working hard every day to reach new milestones. For example, earlier this month, we began the process of deleting historical protected U.S. user data stored in non-Oracle servers; we expect this process to be completed later this year. When that process is complete, all protected U.S. data will be under the protection of U.S. law and under the control of the U.S.-led security team. Under this structure, there is no way for the Chinese government to access it or compel access to it.
One way to read this is that TikTok’s current structure does permit some amount of Chinese government access. This is contradicted later in the remarks — “[…] the inaccurate belief that TikTok’s corporate structure makes it beholden to the Chinese government or that it shares information about U.S. users with the Chinese government” — so I do not think this is some kind of telling slip. But if some representative wants to use their time to needle at the exact language of this statement, I am sure they will.
In September, my family and I move from our home in Dublin to a fancy East Coast college town, where I’ll be teaching for the semester. I grew up in Dublin, which means I have a wide circle of friends to draw on whenever I’m let out of the house. The street where I live is friendly: If I want to borrow a spatula or I need someone to look after my cat, I have only to ask.
On my initial visits, the metaverse seems sort of desolate, like an abandoned mall, and ordinarily I wouldn’t be lining up to join the misfits still populating it. Now that I’m away from my social network, though, I realize how much heavy lifting was being done by the brief, bantering, checking-in conversations I used to have with my friends and neighbors. So I’m determined to find the metaverse’s true believers, those left behind when the rest of fickle reality has moved on. They may not be able to lend me a spatula, but I’ve decided that, for now at least, these will be my people.
There are tiny bleak details in Murray’s exploration of Horizon Worlds — Murray describes the way it “flattens” social interactions, for example, in a way not too dissimilar from the context collapse of social media — but do not let that distract you from the overall sadder perspective. I have not been sold on Meta’s idea of the virtual reality world, and this piece did not make me a convert. It got reactions out of me, though — mostly laughter, occasionally gasps.
I do not want to be entirely dismissive here; Meta may be onto something, albeit early and poorly. But I think it is telling how hard it is to justify this experience compared to how quickly A.I.-generated text and images found a place in the world. After all the swings and misses of the past few years — Web3, NFTs, Meta’s take on augmented reality — people immediately found uses for things like ChatGPT. It has a technical name that reveals almost nothing, and is hosted on a subdomain of the website of OpenAI, a company that current users of its products probably had not heard of a year ago. The financial burden of getting into Meta’s virtual world or becoming mired in cryptocurrency nonsense certainly plays a role in reduced adoption. What we have learned from other devices is that people will pay a price if they can see how it will fit into their lives. So far, this virtual reality stuff is not going well.
These “big tech pays news” schemes break this fundamental idea. They announce that some companies, these big companies who apparently no one likes, must suddenly pay to link. And sure, you can easily state (1) these big companies can afford it, and (2) no one likes them anyway, so maybe you think that’s good. But nothing good comes from breaking the fundamental principles of the open web.
Once you break this concept of the freedom to link, you’re flinging open Pandora’s box to all sorts of mischief. Once industries learn that the government has no problem stepping in and forcing companies to pay for links, does anyone really believe it will stop at news organizations? Of course it won’t. Then the whole internet just becomes a food fight for lobbyists to argue with politicians over which industries they can force to subsidize other industries.
It’s pure unadulterated crony capitalism at its worst. Those with the best connections get to have the government force those with weaker connections to subsidize their own failures to innovate and compete.
Regardless of what merits anyone sees in a link tax for companies like Google and Meta, this is the only argument which truly matters. It is the best explanation for why any site must be permitted to link to another without cost or penalty.
These two companies — which Masnick insists on calling “tech companies”, even though they are “advertising companies” in this context — take a disproportionate share of digital ad spending through, allegedly, no small amount of collusion. It is very difficult, almost impossible, to advertise on the web without giving money to Google. Even if media sites actively crave traffic from Google’s search engine and news aggregator by employing search optimization experts, for example, the resulting ad views will also benefit Google. Even if a Canadian company wants to advertise to a Canadian customer, there is little getting around paying a U.S.-based company something like 30% of the cost of the ad. This is not just the story of legacy systems failing to adapt to a new reality; it is the story of incumbents being crowded out at scale.
Even so, this is a bad bill and sets a terrible precedent. The issues above are at least partly addressable through enforcement of competition laws, not by putting a fee on links. The Canadian government is making it even worse through, as Michael Geist puts it, a “fishing expedition” in the communications of these platforms. This needs to stop.
The irony, noted The Line, was that the Heritage Committee was demanding by month’s end to see documents from Google and Facebook which, if journalists sought them in an Access to Information request to Ottawa, “would take years to get such a request fulfilled, and half of it would come back redacted.” It concluded that the politicians had things backwards. “The government is subject to this kind of transparency and disclosure because the government works for us. Not the other way around.”
Ignore the headline on this article — which I think takes the concerns a few steps too far — but do focus on what is written here. It is shameful.
Although overall global smartphone sales in 2022 fell 12% YoY due to macroeconomic difficulties, global premium (≥$600 wholesale price) smartphone sales climbed 1% YoY in contrast, which allowed the price segment to contribute to 55% of the total global smartphone market revenue for the first time ever.
The premium segment, which has been consistently outperforming the global smartphone market, captured more than one-fifth of total global smartphone sales as well.
After nearly 25 years of operation, DPReview will be closing in the near future. This difficult decision is part of the annual operating plan review that our parent company shared earlier this year.
The site will remain active until April 10, and the editorial team is still working on reviews and looking forward to delivering some of our best-ever content.
I had no idea DPReview’s parent company is Amazon — shows how often I scroll to the footer, hey? — but this is heartbreaking news. Everett says the site will be available “for a limited period” following the stop publication date; not a promising phrase.
This has a local news component to it, too: Chris Niccolls and Jordan Drake, who make the company’s well-regarded review videos, are based in Calgary. They made a farewell video and say there is more to come.
During a Europe trip in May 2022, I uploaded over 6,000 photos and hundreds of videos to the cloud. Upon editing and deleting some of the photos, I encountered an issue with Photos.app, which ultimately led to the complete wiping out of my entire cloud library. Despite my efforts to recover the lost data using tools such as Disk Drill and contacting Apple support, no useful recoverable files could be found. Unfortunately, Apple support refused to escalate the issue to the engineering team due to the use of a beta version of macOS Ventura.
The loss of my lifetime memories and tens of thousands of dollars’ worth of intellectual property is one of the most devastating experiences of my life. I believe it is crucial to highlight the importance of backing up all data, including cloud content, to prevent such a catastrophic loss.
This is among my worst nightmares and I feel terrible for Hill. It feels like an echo of problems of the past.
Cloud syncing is not a backup. Apple doesn’t provide a way to directly download a backup of your photos, so you need to have Photos set to Download Originals to this Mac, which for most people means that they need to fit on your internal SSD. Then you can back them up yourself with Time Machine or to another cloud provider.
I agree with this on principle; in practice, though, a company like Apple probably has a better backup regimen than you or I do. I lost about a year’s worth of photos I took on my phone — from about mid-2013 through mid-2014 — because I backed up my iPhone to my computer, restored it, and it proceeded to overwrite the backup because it had the same UDID. Lesson learned: always back up your backup.
I expect big cloud providers have more redundancies than most of us do, which makes it all the more disappointing that none of this stuff comes with a guarantee. That is, even if Hill were only running production operating systems, Apple — like every major provider of consumer cloud services — does not obligate itself to perform data restoration. iCloud Photo Library may only officially be a syncing service, but it is easy to think of it as something more robust, especially when it is described as “safe”, where your photos are “always available”, “without worrying about space on your devices”. Not sure about you, but after reading Hill’s experience, I am less concerned about disk space and more concerned about whether those photos are actually as safe as they ought to be.
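If you do keep local originals, verifying your copy is at least straightforward. Here is a minimal Python sketch of the idea (my own illustration, not any Apple-provided tool): compare an exported photo library against a backup folder by checksum, flagging anything missing or altered.

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large originals fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(library: Path, backup: Path) -> list[Path]:
    """Return originals that are missing from the backup or differ from it."""
    problems = []
    for original in library.rglob("*"):
        if not original.is_file():
            continue
        copy = backup / original.relative_to(library)
        if not copy.is_file() or checksum(copy) != checksum(original):
            problems.append(original)
    return problems
```

An empty list from `verify_backup` is the kind of guarantee the cloud sync itself never gives you.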
Introducing acropalypse: a serious privacy vulnerability in the Google Pixel’s inbuilt screenshot editing tool, Markup, enabling partial recovery of the original, unedited image data of a cropped and/or redacted screenshot.
This was reported as CVE-2023-21036; Google says a fix is rolling out to Pixel devices. That is the good news. The very bad news is that every screenshot from a Pixel 3 or newer device is vulnerable to this bug, so long as the image has not been passed through an intermediate re-encoding process.
Although I’m currently an iPhone user, I used to use a Pixel 3XL. I’m also a heavy Discord user, and in the past I’d shared plenty of cropped screenshots through the Discord app.
I wrote a script to scrape my own message history to look for vulnerable images. There were lots of them, although most didn’t leak any particularly private information.
The worst instance was when I posted a cropped screenshot of an eBay order confirmation email, showing the product I’d just bought. Through the exploit, I was able to un-crop that screenshot, revealing my full postal address (which was also present in the email). That’s pretty bad!
There are some people in the replies to Aarons’ tweet claiming the same is true for cropped iPhone screenshots. In my testing, that is not exactly right: if you crop an iPhone screenshot and then share it with default sharing options, it does not transmit edit history or removed image data, as best as I can tell. When I AirDropped a cropped screenshot to myself, it sent a re-encoded JPG image instead of the HEIF original. In the iOS Share sheet, there is an “Options” button; if you want, you can toggle the switch to “include all photos data” which includes “edit history and metadata”, including image data removed via cropping. This option is off by default.
Update: It appears screenshots cropped with Windows 11’s Snipping Tool are also vulnerable.
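The underlying flaw, as described, is that Markup wrote the edited file over the original without truncating it, so the tail of the original image survives after the new file’s IEND chunk. A rough Python sketch of how you might check a screenshot for leftover data — simplified to a naïve search for the chunk marker, and not the researchers’ actual tooling:

```python
def png_trailing_bytes(data: bytes) -> int:
    """Count bytes after the IEND chunk; nonzero suggests recoverable data.

    Simplification: uses the first occurrence of the IEND marker, which is
    good enough for a quick check but not a full PNG chunk parser.
    """
    idx = data.find(b"IEND")
    if idx == -1:
        raise ValueError("not a complete PNG")
    # An IEND chunk is the 4-byte type followed by a 4-byte CRC;
    # a well-formed file ends immediately after the CRC.
    end_of_file = idx + 4 + 4
    return len(data) - end_of_file
```

Re-encoding the image — which many sharing paths do incidentally — writes a fresh file and drops the trailing bytes, which is why only un-reprocessed screenshots are exposed.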
During Gatekeeper checks, two internet connections are normally made, to api.apple-cloudkit.com for notarization checks, and ocsp2.apple.com for the validity of the code-signing certificate. The first of those was attempted once, failed, and was promptly abandoned. The OCSP check was attempted multiple times, but was also abandoned quickly.
Failure of those two online checks didn’t prevent or delay successful app launching.
It sounds like this is a more graceful failure mode than the situation which plagued the Big Sur launch. A reminder that Apple said it would introduce a way of opting out of OCSP and, years after making that promise, it has yet to do so.
This blog post and OpenAI’s recent actions — all happening at the peak of the ChatGPT hype cycle — is a reminder of how much OpenAI’s tone and mission have changed from its founding, when it was exclusively a nonprofit. While the firm has always looked toward a future where [Artificial General Intelligence] exists, it was founded on commitments including not seeking profits and even freely sharing code it develops, which today are nowhere to be seen.
Will this AI be shared responsibly, developed openly, and without a profit motive, as the company originally envisioned? Or will it be rolled out hastily, with numerous unsettling flaws, and for a big payday benefitting OpenAI primarily? Will OpenAI keep its sci-fi future closed-source?
This was published February 28, roughly two weeks before GPT-4 was launched.
Speaking to The Verge in an interview, Ilya Sutskever, OpenAI’s chief scientist and co-founder, expanded on this point. Sutskever said OpenAI’s reasons for not sharing more information about GPT-4 — fear of competition and fears over safety — were “self evident”:
“On the competitive landscape front — it’s competitive out there,” said Sutskever. “GPT-4 is not easy to develop. It took pretty much all of OpenAI working together for a very long time to produce this thing. And there are many many companies who want to do the same thing, so from a competitive side, you can see this as a maturation of the field.”
In addition to effort and competition, Sutskever also raises questions about what it would mean for safety if the company was more transparent — something Schmidt pushes back on — while Vincent documents potential legal liability. But are these not foreseeable complications, at least for competition and safety? Why maintain the artifice of the OpenAI non-profit and the suggestive name? A growing problem for tools like these is their trustworthiness; why not pick a new name that is not, you know, objectively incorrect?
I thought Joanna Stern’s look, for the Wall Street Journal, at the second-hand smartphone market was interesting as a whole, but one of the things Stern wrote is particularly notable:
[…] What I do know is that phones really do have a circle of life — cue Elton John — and selling booger-free used phones can benefit carriers, resellers and maybe Apple, too: Even a refurbished iPhone means a blue bubble in your messaging app.
As Apple’s services business is increasingly important to its overall financial picture, the long-tail effect of a healthy second-hand phone market is undoubtedly attractive to the company. It reduces the cost of buying into the ecosystem while reducing pricing pressure on Apple’s part.
If your local library does not have a subscription to the Journal, the video is also on YouTube.
On a rainy Tuesday in San Francisco, Apple executives took the stage in a crowded auditorium to unveil the fifth-generation iPhone. The phone, which looked identical to the previous version, had a new feature that the audience was soon buzzing about: Siri, a virtual assistant.
This first paragraph vignette has problems — and, no, I cannot help myself. The iPhone 4S and Siri were unveiled on October 4, 2011 at Apple’s campus in Cupertino, not in San Francisco, and it did not rain until that night in San Francisco. It was a Tuesday, though.
Please note there are three bylines on this story.
Anyway, the authors of this Times story attempt to illustrate how voice assistants, like Siri and Alexa, have been outdone by products like OpenAI’s ChatGPT:
The assistants and the chatbots are based on different flavors of A.I. Chatbots are powered by what are known as large language models, which are systems trained to recognize and generate text based on enormous data sets scraped off the web. They can then suggest words to complete a sentence.
In contrast, Siri, Alexa and Google Assistant are essentially what are known as command-and-control systems. These can understand a finite list of questions and requests like “What’s the weather in New York City?” or “Turn on the bedroom lights.” If a user asks the virtual assistant to do something that is not in its code, the bot simply says it can’t help.
The article’s conclusion? The architecture of voice assistants has precluded them from becoming meaningful players in artificial intelligence. They have “squandered their lead in the A.I. race”; the headline outright says they have “lost”. But hold on — it seems pretty early to declare outright winners and losers, right?
It’s not surprising that sources have told The New York Times that Apple is researching the latest advances in artificial intelligence. All you have to do is visit the company’s Machine Learning Research website to see that. But to declare a winner in ‘the AI race’ based on the architecture of where voice assistants started compared to today’s chatbots is a bit facile. Voice assistants may be primitive by comparison to chatbots, but it’s far too early to count Apple, Google, or Amazon out or declare the race over, for that matter.
“Siri” and “Alexa” are just marketing names. The underlying technologies can change. It is naïve to think Google is not working to integrate something like the Bard system into its Assistant. I have no idea if any of these companies will be able to iterate as quickly as OpenAI has been doing — I have been wrong about this before — but to count them out now, mere months after ChatGPT’s launch, is ridiculous, especially as Siri, alone, is in a billion pockets.
The VCs are flipping their position on government bailouts after spending years arguing that they shouldn’t be constrained by any regulations or rules, financial or otherwise. People, both with and without a historical understanding of the marketplace, are understandably upset when they perceive a clear and certain statement that for all the talk of “wealth generation” and the moral burden of taking loans and running a business that VCs have done over the years these same self-identified gurus clearly feel no particular ethical requirements themselves. Apparently the moral hazard of the market is for us lesser beings, not the startups and their funders.
I spotted a few of the links in my last post in Zucker-Scharff’s running feed of related topics. This original piece is great, too.
This weekend’s news of the second- and third-largest bank failures in U.S. history has been hard for me to process. I live somewhere that has not had a bank go bust since the mid-1990s, so what happened over the past several days is entirely unfamiliar to me. If you feel similarly confused, I have some links which helped me understand it better, and may hopefully do the same for you.
[Silicon Valley Bank] serves about half of all venture-backed US tech and life sciences companies and has total assets worth $212bn, making it the 16th-largest bank in the US. Founded 40 years ago, it has grown into a fixture in global tech, having banked groups such as Cisco, Ring, Beyond Meat and Shopify in their earliest stages.
It is being rocked as tech start-ups face the biggest collapse in their value since the dotcom bubble burst in the early 2000s. SVB’s market capitalisation has fallen from a peak of more than $44bn less than two years ago to $17bn today.
Also, I am sorry to be rude, but there is another reason that it is maybe not great to be the Bank of Startups, which is that nobody on Earth is more of a herd animal than Silicon Valley venture capitalists. What you want, as a bank, is a certain amount of diversity among your depositors. If some depositors get spooked and take their money out, and other depositors evaluate your balance sheet and decide things are fine and keep their money in, and lots more depositors keep their money in because they simply don’t pay attention to banking news, then you have a shot at muddling through your problems.
But if all of your depositors are startups with the same handful of venture capitalists on their boards, and all those venture capitalists are competing with each other to Add Value and Be Influencers and Do The Current Thing by calling all their portfolio companies to say “hey, did you hear, everyone’s taking money out of Silicon Valley Bank, you should too,” then all of your depositors will take their money out at the same time.
Silicon Valley Bank was just one of three — holy shit! — U.S.-based banks to become insolvent or announce an impending closure in the same week. Two days before SVB was shuttered, Silvergate Capital said it would begin closing down; two days later, Signature Bank was closed. There seem to be many differences in the reasons for the failure of each; one is that Signature and Silvergate were more involved in cryptocurrency markets than SVB. Put a pin in that; I will return to it after two block quotes.
The reckoning came after the Federal Reserve, Treasury and Federal Deposit Insurance Corporation announced Sunday that they would make sure that all depositors in two large failed banks, Silicon Valley Bank and Signature Bank, were repaid in full. The Fed also announced that it would offer banks loans against their Treasuries and many other asset holdings, treating the securities as though they were worth their original value — even though higher interest rates have eroded the market price of such bonds.
The actions were meant to send a message to America: There is no reason to pull your money out of the banking system, because your deposits are safe and funding is plentiful. The point was to avert a bank run that could tank the financial system and broader economy.
Economics writer Noah Smith made the case against the term “bailout” on similar grounds: “No regular folks will owe any money, SVB no longer exists, and SVB management will lose their jobs.”
Others disagreed, saying the Fed and FDIC were intervening in ways that made clear to banks and large depositors across the country that the government would go beyond any established interpretation of the law to rescue them. This was especially galling, some noted, because SVB and other banks had successfully lobbied to loosen post-2008 regulations intended to prevent similar crashes.
“It is a bailout and it does set a precedent,” Dean Baker, co-director of the Center for Economic and Policy Research, told Semafor. “It is a bit infuriating to see us bailing out Silicon Valley geniuses who couldn’t be bothered to do ten minutes of homework on a bank where they are parking tens of millions of dollars.”
So, the terrible short history of this part of this mess: in 2010, following the Great Recession, U.S. lawmakers passed the Dodd–Frank Act to reduce the risk to world financial markets by more closely regulating U.S. banking and investments. Half of its name comes from U.S. representative Barney Frank who, after leaving Congress, joined the board of Signature Bank, saying “[t]hey don’t get involved with exotic derivatives and credit default swaps”. Interesting point, Rep. Frank. A couple of years later, he helped fellow Democrats come around to a position of weakening some of the regulations he helped write. In an interview Sunday with Bloomberg, he admitted cryptocurrency was “new and […] potentially destabilizing” and today said the closure of Signature was more about sending a message. At the very least, it sure looks conflict-of-interest-y and also pretty shitty.
Whether the Trump administration’s kneecapping of Dodd–Frank, encouraged by one of the bill’s namesake authors himself, played a significant role in these developments has yet to be determined. Committing to any narrative is, at this point, probably a bad idea. But if you are interested in right and wrong framing, an opinion piece at the Wall Street Journal is very committed to being wrong; check out the good alt text in the followup screenshot.
Those are the articles which helped me understand this situation better — though, let me be clear, not wholly or comfortably. Maybe they could similarly help you, particularly if you also live in a place where this sort of thing is uncommon. If you are more financially literate than I am, please let me know if I have written anything stupid here.
Jessie Char posted a great Twitter thread about why it makes sense for apps focused on classical music listening — such as the forthcoming Apple Music Classical — to be separated from apps for streaming pop music. If you are less familiar with classical music and want to try the app when it launches on March 28, this may give you a sense of what to look for.
Kirby Ferguson’s latest is not to be missed: a thoughtful exploration of artificial intelligence within his “Everything is a Remix” framework. Great soundtrack, too.
It is also, Ferguson says, his last video. In an April email to subscribers — which I cannot figure out how to link to — Ferguson says future work will more likely be written rather than filmed, owing in part to time constraints. If this is indeed the end of Ferguson’s personal video career, it is a beautiful way to bow out. If you have not checked out his back catalogue, it would be worth your time.
The bill would make Google and Meta compensate news organizations for posting or linking to their work.
A spokesperson for Meta, which owns Facebook and Instagram, said the company is planning to remove Canadians’ access to both written and broadcast news after Bill C-18 becomes law, if changes to the legislation are not made. The tech giant said it would warn Canadians of changes to its services in advance.
Regardless of how you feel about this bill — and I am not fond of it — it is worth remembering Meta used the same negotiating tactic in Australia when its government was working on a similar law, which ultimately passed. It has threatened to do the same in the U.S. if Congress moves forward on a similar proposal. The bill may be bad, but Meta’s scare tactics are feeling a little stale.
Samsung has explained how its camera works for pictures of the Moon, and it is what you would probably expect: the camera software has been trained to identify the Moon and, because it is such a predictable object, it can reliably infer details which are not actually present. Whether these images and others like them are enhanced or generated seems increasingly like a distinction without a difference in a world where the most popular cameras rely heavily on computational power to make images better than what the optics are capable of. Also, these Moon pictures sure seem like a gimmick — how many mediocre pictures of the Moon, with nothing else in the frame, need to be posted to Instagram?
If it sounds like I am being flippant, it is only because I think this becomes more worrisome if it moves away from marketing stunts, as it seems likely to. In the case of these Moon pictures, it seems pretty clear to me that Samsung’s camera software is mapping known detail onto a known object. It is untruthful, but not too impactful. What happens as the categories of known scene types are expanded? As I wrote last month, there were already problems with a simple compression algorithm used in Xerox copiers, which ended up subtly changing numbers in documents. Without getting into FUD territory, imagine those sorts of errors or assumptions in photography, and with the complexity of a machine learning black box in the capture pipeline.
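To make the Xerox failure mode concrete: the copiers used a symbol-matching compression scheme (JBIG2) that stores one representative bitmap per “symbol” and reuses it wherever a glyph looks close enough. The sketch below is a toy illustration of that idea — it is not Xerox’s or Samsung’s actual code, and the bitmaps, threshold, and function names are all invented for the example — but it shows how an aggressive similarity threshold can silently turn one digit into another.

```python
# Toy sketch of symbol-substitution compression (in the spirit of JBIG2
# pattern matching; NOT the real algorithm). Each glyph is a tiny bitmap.

def distance(a, b):
    # Count differing pixels between two same-sized bitmaps.
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def compress(glyphs, threshold):
    # Keep one representative per cluster of "similar" glyphs; each
    # input glyph becomes an index into the dictionary. Reusing a
    # representative for a merely-similar glyph is the lossy step.
    dictionary, indices = [], []
    for g in glyphs:
        for i, rep in enumerate(dictionary):
            if distance(g, rep) <= threshold:
                indices.append(i)  # treated as "the same symbol" as rep
                break
        else:
            dictionary.append(g)
            indices.append(len(dictionary) - 1)
    return dictionary, indices

def decompress(dictionary, indices):
    # Reconstruction only ever emits stored representatives.
    return [dictionary[i] for i in indices]

# Crude 3x3 bitmaps standing in for "6" and "8"; here they differ by a
# single pixel, so a threshold of 1 treats them as identical.
SIX   = [(1, 1, 1), (1, 0, 0), (1, 1, 1)]
EIGHT = [(1, 1, 1), (1, 1, 0), (1, 1, 1)]

page = [SIX, EIGHT]
dictionary, idx = compress(page, threshold=1)
restored = decompress(dictionary, idx)
print(restored[1] == SIX)    # True: the "8" came back as a "6"
print(restored[1] == EIGHT)  # False
```

The output still looks like a perfectly crisp document — every glyph is a clean bitmap — which is exactly why the corruption went unnoticed for so long, and why similar inference inside a camera pipeline is worth worrying about.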
The amount that phone scammers have stolen from Albertans has nearly doubled compared to two years prior, mirroring a national trend with fewer victims, but millions of dollars lost.
Data from the Canadian Anti-Fraud Centre shows that in 2022 there were 849 reported victims of scam calls in Alberta, totaling more than $5.4 million.
In 2021, 757 Albertans lost $3.4 million and in 2020, 927 people lost $2.1 million. The data relies on what was reported to the centre.
The federal government says around $530 million was lost to scams of all types across Canada in 2022, 40% more than in the year prior, which was itself a 130% increase over 2020. While I leave open the possibility the RCMP is overestimating, I have noticed an increase in the number of scam calls I have been receiving, often multiple per day. I ask them if they are okay with how much they are stealing from people who do not know better, and they often say yes. Sometimes, I think about how sad this is for everyone involved.