Reference

When Anything Can Happen, Nothing Matters

Film theory: an explanation for why so many overproduced movies are emotionally unsatisfying

If you've ever watched a movie where, as the climax approached, the story got too big for its britches, and the filmmakers kept adding ridiculous twists and turns and upping the ante, and instead of the movie getting more interesting it became harder and harder to suspend disbelief, and the story started to feel disconnecting, and you began wondering how much longer the movie would take, and how much more climactic this already hyped-up scene could possibly get... congratulations, you have experienced what I term,

"When Anything Can Happen, Nothing Matters."

When done right, the action in a story feels personal to individual characters. When we know the stakes (and the stakes are not just "lots of empty, emotionless buildings will get smashed; the whole city is under threat!"), we can feel empathy for specific characters and we are drawn into the narrative. When the filmmakers keep "pulling back" and doing a lot of what Brad Bird called "God shots," where the camera looks down on the action from above because the spectacle is so vast that we have to take the mile-high view, literally, and the focus shifts to spectacle over character, then we are at risk of losing track of why we should care.

A list of movies that suffer somewhat from this problem:

  • Incredibles 2 (literally trying to save the city)
  • Despicable Me 3 (really fun movie with a trumped-up, silly, even boring ending)
  • Minions (story gets too big, literally)
  • Penguins of Madagascar (save the city, save everyone)
  • Home (2015 DreamWorks film; save the planet)
  • The Hobbit (fun story, fun story, war campaign, fun story, fun story)
  • The Avengers (2012 film; save the city, punch a building, nuclear threats, endless outsized horrors, and expanding good-guy powers to counter the horrors)

There are many other examples.

An example of a franchise that sets clear rules in order to avoid this problem is the Harry Potter books and films. The rules are there so that writer and reader alike know the world cannot just change at the drop of a hat. Something is at stake. We will not feel emotionally manipulated: "Oh no! Oh no! It's about to get terrible! ... Just kidding, the good guys had this the whole time!" No one wants that.

Note that when this rule is knowingly broken, and new rules or weird backstories are introduced but the writers dig in and explore the ramifications, it can create tension and fun stories. The series Adventure Time and Rick and Morty, for example, are truly bananas at times, but those stories explore the craziness instead of pouring it on thick and then magically erasing it a few minutes later.

Note also that superhero stories can feel connecting when individual characters are vulnerable, and we can relate to them and imagine ourselves in their shoes. Spider-Man and Batman are two examples. Spider-Man allows us to wonder what we would do with new powers. And he is always getting banged up and hurt. He struggles even to understand his powers at times. That aspect of a superhero story is relatable and very human. Batman is even better, because he is the ultimate self-made superhero. He created his own superpowers with his intelligence, his gadgets, his study of combat. And he is still very vulnerable. We worry about his fate because he is so human (he is literally not superhuman like Thor or Superman). The opposite is very disconnecting: the struggle of titanic forces against each other, evil gods and good gods fighting at an inhumanly large scale.


Apple

Just a VR Headset?

Or, No True AR Headset Fallacy

Apple has sent review units to a handful of reviewers, who have had a few days to create videos and write reviews of the Apple Vision Pro; the general public gets its hands on the device in a few days.

Irksome comments by otherwise intelligent people

Sam Kohl and Jon Prosser react to these reviews and have a healthy back and forth about the product, based on the earliest public information that hasn't been filtered through Apple.

These two close Apple watchers offer insightful push-back, which is healthy in the sense that when we try to prognosticate about the future of a product, we need to understand things from every angle and as they really are.

However, Jon repeats something Nilay Patel says in The Verge's review, which is that Apple's headset "is just a VR headset." What Nilay is saying is that he has historical context for Apple's headset because he has been using and reviewing such VR headsets for years now and following the technology closely. He obviously knows what he is talking about from a hardware point of view. He speaks fairly about the limitations of the device: "magic, until it is not." Everything he explains is based on a close examination of the device itself. He is careful about the details.

However, I wonder if Nilay's comment that Vision Pro is "just a VR headset" will age about as well as CmdrTaco's take on the iPod: "No wireless. Less space than a nomad. Lame."

Jon Prosser repeats Nilay's line that the Apple Vision Pro is just a VR headset, and he suggests that Facebook could have made the Apple Vision Pro: if they had been willing to charge customers $3,500, they too could have put better outward-facing cameras and better displays in front of the wearer's eyes, and that imaginary headset would have been very similar to Apple's shipping Vision Pro, perhaps better.

The Ship(s) of Theseus

The mistake these reviewers are making is just a twist on the old philosophical problem of identity, famously explained as the thought experiment of the Ship of Theseus. We are asked to consider Theseus on his seafaring journeys, and his crew needing to replace pieces of their ship. Gradually they repair and fix the ship over many years, until every piece of the ship has been replaced. Is this ship the same ship as the original ship? When did it cease being that “same” ship? What if we take all of the pieces that were replaced and collect them up, and rebuild a run-down version of the original ship? Now we have two ships. Which ship is the real Ship of Theseus?

One solution to the problem is to use a different, functional definition of identity instead of some nominal concept. (In other words, nominally, the only "real" Ship of Theseus was the first one, and it stopped being the Ship of Theseus once a single change was made, in fact once it hit the water and started to weather. This is one solution to the thought experiment, but it doesn't match our intuition.) A functional definition might be that the Ship of Theseus is whichever ship takes Theseus and his crew on their adventures. The parts of the ship relate to one another functionally, and as long as enough parts of the ship are functioning together as a ship, it can be considered the Ship of Theseus. We hold identity lightly. Any actual sailing ship that Theseus and his crew sail on can be considered the Ship of Theseus.

So, applied to headsets, a functional definition (one less reductive than just examining the hardware) might say that any headset that can do augmented reality things and mixed reality things is not "just a VR headset." (Virtual reality becomes an immersive, surrounding software experience, not a category of hardware.)

The Book of Face

The fallacy is that Facebook or other manufacturers could have replaced each component of their headsets with better-spec'd components, one by one, until they arrived at the Apple Vision Pro. Then those vendors could have spent 2024 getting developers to port their apps to it, built a headset platform that goes beyond virtual reality games and entertainment, and eaten Apple's lunch.

This may be factual (debatable) and superficially convincing, but it is still deeply wrong, because it ignores reality. The fact is, Facebook

  • "could have" pushed the state of the art harder;
  • could have produced their own silicon;
  • could have hired better industrial designers instead of letting the skunkworks Vision Pro group at Apple hire them for a more exciting project;
  • could have treated pass-through (reproducing the world around you) as more of a core requirement to create AR / mixed reality in a single headset, instead of an afterthought;
  • could have watched Oblong and John Underkoffler pioneer spatial computing between his 2010 TED talk and when the Apple headset project really got going (Oblong hired a handful of ex-Apple individuals who returned to Apple after a few years at Oblong);
  • and Facebook could have tried to ship a phone (oh wait, ten years ago they tried) and a tablet platform so there would be thousands of tablet apps and phone apps to bring over to their headset.

Facebook could have done all these counter-to-their-culture things, but the fact is: they didn't. That timeline is not our timeline. And the same goes for Samsung, Sony, Microsoft and Amazon.

I call this lying with facts. You throw out narrow little facts which can all be verified to be true, but you ignore other big, obvious things that show the facts don't come together to imply the conclusion you claim they do. Otherwise very intelligent people fall for this all the time because they don't step back and look at the big picture. This kind of misunderstanding is a categorical misattribution:

  • The automobile is just a horseless carriage, minus having to feed the horse, scoop up mountains of horse manure, and deal with the horse dying one day; and you can't replace a horse's broken leg the way you can replace a tire, you have to shoot the horse.
  • Human beings are just wimpy great apes with long legs, no fur, and a bigger cranium; if you added a bigger brain, and reason and art and mathematics, and culture and imagination, and religion and architecture and music and taxes and martial law and graveyards, and writing systems and ten thousand years of hard-won knowledge, and the scientific method (which religion fought against for centuries), and the Enlightenment, and capitalism, to gorillas and bonobos, you would get humans.

These statements are tantalizingly, superficially true, but they miss the heart of identity: when A + B = C, the differences listed in B are what make A and C so different. B is so vast, it doesn't bring A and C together, it pushes them apart.

Apple's headset is clearly very different from the existing headsets on the market in so many already remarked-upon ways, especially price. These pundits undermine their own point by saying that Apple's screens and outward-facing cameras are light-years ahead of their competitors'. That's the starting point you need to create an entirely different first-hand experience for people who are not full-time VR headset reviewers. There is some line being crossed that Apple sees, that normal people see, but that VR headset reviewers and manufacturers cannot see. To justify purchasing a headset, it needs to do more than just VR; it needs to be more than just games and entertainment. Then it becomes much more useful than just a game console. Then it justifies a higher price too. (Or this is all Apple's hope.)

What makes this all the more infuriating is that Apple has done this so many times before, especially with the Macintosh, iPod, and iPhone. (And essentially zero competitors have yet created a viable, truly portable tablet platform to compete with iPad: a platform that feels mobile-first, with ten-hour battery life and every app for the platform feeling native. Windows tablets do not satisfy this criterion, for battery and legacy (touch target) reasons.) Again, this doesn't make the new visionOS platform a shoo-in, but it does mean that Apple gets to lead the technology industry for a lot of reasons, not just their widely feared faithful fan base; the main reason being vision (pun intended). It just seems that the industry keeps waiting for Apple to come in and show the way, do something initially counter-intuitive but groundbreaking, and then all the vendors race to catch up and iterate. But somehow only Apple seems capable of this step-function way of thinking.

And Nilay Patel and other smart pundits keep highlighting this type of narrow thinking with their lack of imagination, trying to shove a square new peg into a round old hole. I'm not the one putting words in his mouth; Nilay is really trying to do this! These pundits (Nilay and Jon) are supposed to be the ones with the perspective to understand what is really going on, but they seem not to see the forest, just trees, very close up, in great detail, perhaps just bark (as I said, they are deep into the details), based on these sorts of farcical comments relegating a very different headset to "just VR" when it is clearly designed with a much broader vision in mind, with significantly more capability in both hardware and software. It's just so uncanny how predictable some pundits are when they try to pretend to be contrarian for the sake of coming across as objective. It makes them seem anything but! Again, the iPhone shipped in 2007 and tons of smart people publicly missed what was right in front of them, for several years, until it was obvious to everyone. I still wonder: why didn't everyone get rich from investing in Apple after the iPhone was publicly announced? The stock price stayed very, very inexpensive for several years, until at least 2008 or 2009, and it has gone up something like 50 or 60 times since then!

Is Software King or Is Hardware King?

I think these tech pundits and reviewers also suffer from an annoying form of reductionism or dualism: they see Apple's products more as pieces of nice hardware where the software is a necessary and limiting evil; they are often the type to lament that they cannot hack the hardware to run their own software, the way commodity PC hardware can run Windows or Linux.

Apple does not see it this way and never has. Apple starts with the software experience they want to ship, then works backwards from the software to the hardware, then works for something like seven years to create the hardware that can enable the experience they want, and then they ship the complete package when it's ready. Apple knows their users see the hardware as a necessary evil. Think about it: every piece of hardware that Apple ships has always carried all or most of the downsides or costs (to their users) of their products. (And the software limits and obvious shortcomings are lifted gradually through annual updates, until OS updates become kind of boring fifteen years later.)

Travel back in time and show a caveman your iPhone. Tell him, "Look, iPhone software is amazing! Look at all the many things the software can do! Sadly you must carry around this expensive, heavy device in your pocket, with limited battery life, but it is a price worth paying to get access to those amazing software experiences!" Same with the Vision Pro. Apple knows their users kind of don't care about the hardware (inside the case), especially not the specs, which reviewers focus on so unduly. Users only use software, not hardware. Hardware is the price we all pay to use software.

In the 1980s and 1990s, Apple used to have posters on the walls of the product teams' offices that said "Software Sells Systems." And I feel strongly that Apple is at least as software-first these days, if not more so. From the user's point of view, the experience is so software-centric and the hardware so deliberately minimal, it's hard to argue otherwise. (Yes, this is reductive; Apple is not dualistic about this; they design and build products as complete experiences, with their development structured around functional, cross-cutting teams rather than divisions, as other companies might do, with their Microsoftian fiefdoms.)

It's all about the software, stupid

Any third-party software (that is not Facebook-owned and thus exclusive to Oculus) can be ported to Apple Vision Pro (seated experiences only, based on Apple's documentation and VR safety issues). However, probably more than half of the software that developers ship on Apple Vision Pro (especially the rectangular window software coming over from the iPad, something there is not a lot of on other headset platforms) will not be something that can be ported to Oculus, because it assumes a world of many apps running, many windows, or new experiences that assume a background of the real world with a baseline elaborate (and expensive) pass-through. If new AR experiences created for Apple's headset cannot be ported to any existing VR headset, even in principle, in spite of developers wanting to do so (because, for example, they assume you can cook in your kitchen with the headset on), how can there be any doubt that the Apple Vision Pro is more than just a VR headset?

It's so weird to have to call out intelligent pundits for saying something irksome and thoughtless, and to have to explain the basics of how words work. If a headset is used for augmented reality and mixed reality activities, and sometimes virtual reality (fully immersive activities), then how is this "just a VR headset"? Yes, Nilay is smart and understands all the specs of the hardware, so this must be just a high-specced VR headset! But even my five-year-old would say: wouldn't headsets that are mostly used for VR experiences be "just VR headsets," and headsets that are mostly used for other things be other-thing headsets? The mind boggles when words cease to work as expected.

All of this means that this is not just a VR headset; it's a platform. Yes, Apple is achieving mixed reality and even a few augmented reality features by cheating and using pass-through (the "no true augmented reality" fallacy), but they are shipping something real that developers can create apps for, now, not some future glasses that don't exist. Once all this 2024 software exists, the platform will have much more momentum than any other headset. By the time other vendors catch up to the 2024 Vision Pro, four to six years from now, Apple will be shipping the newest 2028 or 2030 Vision Pro or Vision Air product.

I get the feeling that Nilay Patel and other reviewers and pundits are kind of just annoyed that (1) they didn't see it coming (see my article where I elaborated on this, months before any Apple announcement in 2023, and about a year before the product shipped) and (2) Apple pulled off a trick that no one else has done: using VR to "fake" AR and mixed reality. It's like these pundits want to grab people and yell in their faces, "This is just a trick, this is just VR warmed over; don't fall for it! It's not true AR! The windows aren't really floating in your room!" Like, OK, you are trying so hard to be right, but who cares? Again, it just feels like some sort of "no true AR" fallacy. Here is a gold star, you are technically right, but your words are meaningless; congratulations, you broke the English language. Pat on the back.

Keep it up, pundits; keep misunderstanding leaps in technology, even after Apple publicly explains them thoroughly, for the sake of trying to seem objective. By the time it is too late to go back and understand it in real time, Apple's stock will be up too high, and people will think, oops, we missed that train. Without pundits confusing everyone and causing misunderstanding, perhaps competitors would see things more clearly too, and actually give Apple a run for their money, instead of leaving Apple a clear runway, every time.

On Releasing Flawed Products

One last comment: I don’t mean to imply that the commentators are not pointing out legitimate flaws with the product. It is clear that the hardware needs improvement, and that Apple oversold certain features.

There are only two types of products: those that are released too soon, and those that never ship at all.

However, I think industry watchers keep forgetting that this happens every time a product is released: Apple Watch, which iterated in public because Apple needed to learn how people use it; iPhone, because a 1.0 product, however flawed, was better than waiting another five years to ship a "perfect" version of the device; iPad, with the 1.0 version seeming like an aberration in terms of thickness compared to even the iPad 2, which feels much more like the iPad we are used to. A perfect device is a boring device, and that only occurs when the platform is more mature, which by definition only occurs when developers can ship apps to users on hardware that can be purchased by the public.

On missing competitors' apps

The three missing third-party apps that keep being mentioned are YouTube, Spotify, and Netflix.

I think Apple may be genuinely annoyed that there is not a native YouTube app for Vision Pro that works better, with right-sized controls, because Apple has no competing user-uploaded video service. However, I think Apple is not at all annoyed that Netflix and Spotify are not on the platform yet. I think they know that their customers who can shell out nearly $4,000 for a 1.0 product can afford $20 a month to test music and video out using Apple's competing services: Apple TV+ and Apple Music. You might even say that TV+ as a service was probably started long ago with visionOS in mind, to make sure Apple could control the availability of high-quality original content for this (then upcoming) hardware play. And the Disney+ app shipping not just natively, but with the device, is no accident.

Again, I think Apple is happy to see Spotify treat themselves as irrelevant on this platform, at least at this point. Apple can control everything about the experience with their native Apple Music app. I'm sure they see allowing Spotify on their platforms at all as a cost of doing business, as redundant and lesser. I'm not saying they don't see customers wanting it there, but I think from Apple's point of view, they think: just cancel your Spotify subscription and use that money for Apple Music, and you are set. It sounds cold, but can you imagine a Steve Jobs or Eddy Cue email where they act annoyed that they have to let Spotify on the platform at all to appease regulators? I can! The iPhone had no native third-party apps for a year. And even then Apple rejected apps for reproducing existing functionality (Calendar, Email), until at some point they let up on this early rule. Apple can be annoyingly opinionated and controlling, surprise!

Conclusion

None of this means I think this category is a shoo-in, and I don't think any success of the Vision hardware and visionOS platform will be as large a success as the iPhone, probably not even close. But I think visionOS could create a business that complements the Mac and iPad and even subsumes and grows that core productivity computing market. I think Mac, iPad, and Vision will be the nucleus of Apple's productivity and creativity platform, which also includes content consumption. And I think that by the time the platform has gotten its legs, and the price has dropped (some?), and battery life is better, and weight has been reduced, it will be hard for competitors to catch up. By the time it becomes obvious that Apple has a clever approach, or a "now it's so obvious" approach, it will be too late to benefit from that knowledge, and Vision Pro will be to other headsets what iPhone is to other smartphones: the main attraction, with the best software. (What new software product launches Android-only before it launches iPhone-only? I don't mean emulators and utilities and stuff like that; I mean mainstream, large, successful businesses. What Android-only high-quality tablet software is there? iPad has tons of this stuff.)


Reference

The Truth About Tonewood

One of the most annoying and stupid ideas discussed in the Electric Guitar Gear Enthusiast World (mostly dominated by men, or really young men, with enough time to be worked up by such things) is “tonewood.” Supposedly, electric guitars made of different kinds of wood sound perceptibly different when played through a guitar amp. This has become an ideology and a supposed, raging internet “debate” (there is no debate; one side repeats actual facts and asks for evidence, and the other repeats thought-stopping catchphrases). But no one lists all the facts in one place while still acknowledging a few valid nuances.

And the artists who work with instrument makers on signature models don’t want to upset their revenue stream, so they repeat the party line that tonewood is worth paying for, even when this can be easily demonstrated not to make sense, based on the instrument makers’ own claims and marketing (see the very last point especially)!

My thinking on tonewood has become ever-so-slightly more nuanced with a few key facts I learned, and some historical context.

  1. Before electricity, tonewood was a real concern for instrument builders for centuries, and not just for guitars: oboes, clarinets, violins, any instrument made mostly of wood. This explains where the myth’s foundation in actual truth came from: two or three centuries ago, wooden instruments made their tones using only or mostly wood, not other means (electromagnetism).
  2. Of course tonewood choices affect the tone of 100% acoustic guitars.
  3. Choice of tonewood also definitely affects the weight of solid-body guitars, which may affect the sustain, in the sense that a really crummy, too-light guitar might let the body vibrate instead of staying in place (unlike the ideal immovable heavy benches in Jim Lill’s video about the electric guitar he built using a workbench and an air gap).
  4. Wood choice probably affects the tone of fully hollow-body electric guitars with electromagnetic pickups, like early Gibson ES guitars. Wood choice likely also affects the tone of semi-hollow-body guitars with EM pickups, or at least the sustain of the notes. If not, then why are semi- or fully hollow bodies with EM pickups even sold?
  5. Historically, some wound electromagnetic pickups existed, and still exist, that are non-potted, meaning they are not dipped in wax to muffle vibrations from the body, so they are somewhat microphonic. An example is a Gibson Les Paul with PAF pickups. When you connect such a solid-body guitar to an amp and turn it up, then mute the strings with the left hand and tap the body near the bridge with the right hand, you can hear a tap-tap sound coming out of the amp. So if the strings are not vibrating and the pickups still make a sound, then the tone of that tapping sound is affected by the wood and weight of the guitar. (Perhaps one can even hear this sound by tapping near the pickups when the strings have been removed.) That all sounds plausible. However, it is not clear how much of this is covered up by the string signal when the strings are vibrating, or whether there is a measurable connection between body vibrations and string vibrations. I would believe it if people did a controlled experiment. But I am open to the possibility because it is plausible.

This much of tonewood is a real phenomenon.

However,

  1. Inasmuch as a set of pickups is not microphonic whatsoever (if this is possible), the wood itself is transparent to EM radiation and cannot affect the current in the wires going to the amp unless the solid wood affects the vibration of the strings at the bridge and nut (or fret, or glass or metal slide). This seems unlikely, as demonstrated by Jim Lill’s detailed video, especially considering the much larger effects of:
  2. Pick shape, size, and material vs. fingers touching the strings (the debate in classical nylon-string guitar circles about playing with the nails or with the flesh of the fingers is demonstrative); string material, string age, string gauge, scale length / tension; pickup placement along the string, pickup height from the string, pickup design and construction (which has a huge and measurable effect on the tone, not really in question by anyone); bridge design and materials; nut design and materials; fret materials, height, and shape.
  3. However, a lot of these effects (besides pickups and scale length) pale in comparison to drive pedals, the preamp design and tone stack of the amplifier or pedals, and especially the parts that make the sound and move actual air: the cabinet, speakers, and microphone. These parts ship with frequency response curves because their effects on the signal can be easily measured. As Glenn Fricker points out, tonewood doesn’t ship with frequency response curves. Even if tonewood affects tone, it is such a small effect as to be completely buried by the other factors listed above, unless you have a guitar that is hollow (or somewhat microphonic wound electromagnetic pickups, and even then only maybe, and only slightly).
  4. There is no way in hell fretboard materials can affect tone on solid-body guitars, because the fretboard does not even touch the strings. I would bet a lot of money that no blindfolded luthier can beat a deaf person (who can cheat and use their eyes) at telling apart by ear, better than chance, two guitars that have been constructed identically and vary only in fingerboard material (a rough sketch of what “better than chance” would mean is given after this list). (Assuming the luthier cannot tell by touching the fingerboard while blindfolded, or by building in some touch-based method of identification.) No way in hell. (Especially under high gain.)
  5. I would concede that a luthier or high-quality manufacturer might claim to be unable to make two guitars from the same materials that could not be told apart by ear, that is, that the variation between two such guitars might be perceptible to a blindfolded listener. In that case, I would respond that tonewood fretboards are even less likely to produce measurable tone differences, because if individual guitars with the same materials and manufacturing already vary that much, then variations attributed to fretboard materials are more likely explained by this kind of natural variation.
  6. I am willing to believe that for a fretless guitar or bass, the material of the fingerboard affects the tone, but not as much as the other factors mentioned, or round-wound vs. flat-wound strings, how far from the bridge the strings are plucked, how hard the strings are struck, the amplifier and cabinet, etc.
  7. On the whole, tonewood selection only matters in solid-body guitars if you care about weight, aesthetics (assuming the body is not painted), or resale value, not the actual sound. For the sound, just buy different pickups, change strings, scale length, number of frets (and hence pickup placement), pickup height, etc. Especially if you are buying a painted-body guitar, choose the wood based on weight, price, ergonomics, and comfort, not any other factor (keeping in mind the caveat about a too-light guitar that does not stay in place properly).
  8. Why are cheap solid-body guitars with more pieces in the body (three, four) considered a negative compared to “nicer,” more expensive one- or two-piece solid bodies, while fancy 11-, 5-, and 3-piece necks are considered a good thing and a one-piece neck is considered lower end? This makes no sense unless neck materials have a < 1% effect on tone. If that is the case, then multiple pieces in the body also have a < 1% effect on tone. And the choice of fretboard wood probably has even less effect than the entire neck (since neither the neck nor the fretboard touches the strings), which is probably also a < 1% effect on tone.
  9. The difference in price for fancy multi-piece necks is because of aesthetics, not tone. This paradox of “more body pieces bad” / “more neck pieces good” seems like a hokey, arbitrary tonewood myth, which is obvious guitar-maker marketing shining through. Aesthetics, premium materials, and higher labor costs for higher prices make sense, but conflating wood aesthetics with tone when the effects cannot be measured (or rather, when the effects can be measured and are shown to be nonexistent or negligible) could be considered unethical. (And it just so happens that the supposed tone differences are that maple fingerboards (a light-colored wood) are described as sounding bright and rosewood or ebony (dark woods) are described as sounding dark. Mmm hmm.)
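
As promised in point 4 above, here is a minimal sketch of what “better than chance” would mean in a blinded listening test. This is my own illustration rather than anything from the tonewood debate itself, and the trial counts are hypothetical; it simply computes the odds of scoring at least that well by pure guessing, using only the Python standard library.

    # Minimal sketch of scoring a blinded A/B listening test (hypothetical numbers).
    # If a listener cannot really hear a difference, each answer is a coin flip,
    # so we ask: how likely is a score this good from guessing alone?
    from math import comb

    def p_at_least(k: int, n: int, p_chance: float = 0.5) -> float:
        """Probability of k or more correct answers out of n by guessing alone."""
        return sum(comb(n, i) * p_chance**i * (1 - p_chance)**(n - i)
                   for i in range(k, n + 1))

    # Hypothetical example: 14 correct identifications out of 20 trials.
    print(round(p_at_least(14, 20), 3))  # ~0.058: not convincingly better than chance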

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” (Upton Sinclair; for “man,” read “luthier.”) For centuries before electricity, luthiers were right that tonewood mattered. However, they stopped asking whether it still mattered once the tone was no longer created acoustically but electromagnetically. Tonewood mostly doesn’t matter any more, at least not for the reasons they claim it does.


Design

Plague of Plagiarism

Matt Gemmell, esteemed independent fiction author and former technologist and pundit, offers a seemingly hardline stance against using new tools that never existed before for serious creative work. He makes a point many of us already feel: that we are wary of claiming we “painted” or “wrote” something when we really only cobbled it together, or built it, or stumbled onto it, or tweaked our way towards it.

He has my number. When I produce artwork using these novel tools, I call it modding or salvaging, and I call myself an “ainter,” as in an “AI painter,” and “I ain’t a painter.” I produce images, or aintings, as a hobby because it is enjoyable, and I don’t give a damn if I don’t have serious grown-up approval. But I am on the same side as Matt on this, probably because I did not grow up in a world where these tools simply always existed, and for obvious reasons I will never pretend I can produce an entire 9K digital painting, at the level of quality and rapidity at which I currently produce them, without the aid of AI tools. No one who knows me would believe me anyway, without evidence that I could do it laboriously by traditional means.

As an aside, I have actually developed my eye and my painting ability significantly while working with these tools, because sometimes traditional digital painting is just a much more rapid way of communicating what I want to the machine. And the machine screws things up a lot. A lot. Also, I am certainly not hiring models and working from life sketches or photographs to produce these images. But that doesn’t mean I cannot or have not done this. I think I have significant skills in this area (based on being at or near the front of a life drawing class in school), and the assumption that someone who uses an AI art tool probably cannot draw is folly. (Mr. Gemmell does not claim this, but it seems heavily implied by calling use of the tools “automated plagiarism.” The phrase is nothing if not meant to belittle, and to warn people away from developing skills at using these powerful new tools.)

I Know I Didn’t Produce the Entire Image Myself

But so what? Many artists cannot draw something from their mind’s eye with no reference, or at least we know that the results using reference are ten times better. Some artists can, but most cobble together reference into a new image as a matter of course. It’s how they work. They may do a web Image Search and carefully compose and remix, and it is not currently considered plagiarism, especially when the artist puts together enough pieces to create something “new.” But what are they doing when they cheat and draw pictures of the sea stacks of Iceland without having been to Iceland themselves? Is it really plagiarism to avail themselves of such a powerful reference tool as web Image Search? And why aren’t the hardliners speaking out against this borderline plagiarism? Is it because we have had this one particular tool for many years and it is boring to talk about? Or is it because there is room for nuance in discussions about creativity, because web Image Search can still be used both for plagiarism and for novel creative work?

So now we live in an era when a machine can do the remixing very quickly for us, using billions of reference images, and can produce handfuls of infinitely varying, synthesized reference images for us, and we see one and say “that’s what I was looking for!” And now, when our eyes are filled with an image that no human has ever seen before, it is suddenly plagiarism? I think that makes no sense.

Every book is a quotation, and every house is a quotation out of all forests and mines and stone quarries, and every man is a quotation from all his ancestors. — Ralph Waldo Emerson.

Somehow human beings know there are billions of us on this earth, yet we still think we are special and that each creative work is some new new thing, when deep down every human knows they are not an island, that a thriller writer is not a plagiarist just because they didn’t invent an entirely new genre, and that their own work must be cobbled together from many other things, even if they can tell you their influences. Somehow humans keep forgetting what we have known for thousands of years: that “there is nothing new under the sun” (Ecclesiastes), that “everything is a remix” (Kirby Ferguson).

Gemmell’s article uses this hesitancy to underline my own wariness at claiming the entire final work as mine. I have been stewing over this, sometimes wondering about all the really interesting work I have miraculously made seemingly much too easily, yet actually still laboriously (why am I pouring hours into this again?). But I don’t think his framing is an entirely accurate approach to the question.

No Old Answers to New Questions

AI art tools are genuinely new. And many people (students) will likely try to turn in work that was knocked off with no effort. But the teacher is assigning a pointless essay to be written at home instead of monitoring students in a proctored test environment. Plagiarism is, and already was, a problem to be dealt with. Cheating and impersonation have been here longer than AI, and AI will not help make this better. But teachers could invent new ways to get students to want to write, by picking topics that matter more to students. (Or perhaps there will be ways to use AI tools to generate unique tests for each student, so that cheating might become impossible, with the difficulty level precisely calibrated by AI. Thus AI tools may be the solution to the AI tool problem.)

We know all this. But for some, the handwringing and doom and gloom about new and exciting and scary tools overshadow any possible benefit. Honestly, I am shocked that people are not more excited about the possibilities. I am shocked that otherwise technically capable people who love technology and consider themselves technophiles haven’t seen the ways these tools remix endless pieces of ideas in genuinely astonishing, creative, and beautiful ways, and instead relegate them to some parlor trick, some passing fad.

I would not have pegged Mr. Gemmell as a Luddite, and I don’t think he is entirely wrong. But I think many people are taking sides on the uses of these new AI art and AI writing tools without having seriously attempted to use them. Anyone who actually tries will quickly learn that the results tend to be unreliable at best. People who have spent months (probably no one has spent more than about 18 months using these tools seriously as of April 2023) getting good at this stuff may be good at making it look easy. But reliably finding and improving great images is not as automatic as many non-practitioners might think. My process for creating high-quality, high-resolution, poster-sized images is involved and convoluted, refined from months of experimentation. And the novel styles created by some AI art practitioners who plumb the tools’ depths lack any reference to any living human artist. These people are homesteading new frontiers.

I think framing what most people who use these tools casually create as “automated plagiarism” is not quite right, but I would not hesitate to call it “automated fan art” or “automated remixing.” Remixing enough influences (not just aping one angle), and finding and refining genuine gems that are coined from random input noise, is not actually as trivial as plagiarism. In other words, what makes the work into art is not just the effort, and it’s not just the result, and it’s not just luck. It is some combination of the three, no matter what the tools. But when a skilled practitioner is able to go beyond luck and reliably produce high-quality results with significant effort, who are we to say it cannot be called art? Why write yet another damn fiction novel in a world with 129 million ISBNs in 50 years? Because writers can’t not write.

Indeed, the drive to make art is nearly universal, and better tools have always made it easier. I’m sure someone has told students, “it’s not real oil painting if you don’t crush your own pigments.” But many oil painters would call this pointless snobbery, and it’s only slightly different from any other kind of gatekeeping of creative work based on just the tools. I would not advocate gatekeeping someone else’s art at all. Were I to allow it, my gatekeeping would be based on (a) results, (b) the urge to create, (c) the spark and joy of discovery, the muse, even the Fortunes and the Fates, the random noise, the contingency and unpredictability of it all, and (d) the effort, the polishing, the tweaking, the curating, the iteration, the nose to the grindstone. How is any such process not just good old-fashioned capital-A Art?

Art has always been about the grind, and then stepping back and sharing just the best. It’s about vision and purpose, about having something to say. It’s editing, it’s inspiration, it’s iteration. And then another round of selection. I think a prolific photographer curating their own work (including polishing and tweaking what came out of that automated picture-making machine) is actually not that different from curating a few gems from hundreds of fantastic AI finds. I think the creative impulse and the keeping-at-it process using new AI tools is not so different from what a traditional artist spends their time doing. Those who proclaim otherwise seem to me to be projecting their ignorance, fear, and snobbery.

Working Through a Slow Human Team vs. Working Through a High-Speed Robot Team

Finally, I also think that a simple rewording of his argument shows how it can fall apart. Take the example of the late work of architect Frank Lloyd Wright. He worked through a team of students and understudies and is still credited with his later buildings (the students may have received some credit; they executed all the drawings). But according to Mr. Gemmell’s narrow definition of authorship, and taking the structure of his argument:

When people invoke [a team of underlings] to generate something, they often still use the language of endeavour: here’s what I created, or made, or built with this [team]. Those words reveal the truth, as words invariably do.

To [direct a team] to generate something for you is not an act of creativity or engineering, because such acts are in the execution, not the idea. On the contrary, it’s automated plagiarism. It doesn’t matter that the originator is a piece of software instead of a person, [or a team of people]; what matters is that it wasn’t you.

When the end result is built from the works of others, and when the building is also done by an agent other than oneself, there is no legitimate claim of authorship. This is elementary.

This is bullshit. Directors of films and art directors of large teams share credit, but they are not committing plagiarism. I know I am in danger of putting words in Mr. Gemmell’s mouth, but I think the structure of his argument is flawed, and it is clear that Frank Lloyd Wright, working through his students, can still be considered an author, an auteur, and not a plagiarist.

(“You can’t compare yourself to Frank Lloyd Wright.”)

You don’t have to consider an AI art tool to be an art department, but I don’t think there is anything structurally different about it besides that it’s made of metal and copper and silicon instead of meat and bones and neurons. I think the main arguments against using these novel tools are that they are too new, too scary, too fast, too cheap, and too good at remixing too much reference. They make creativity too easy, so they must be bad.

I disagree. I think poorly written essays, factual inaccuracies parroted by AI chatbots, or poorly realized AI paintings should of course be considered a nuisance, or simply poor art. But I’m not naive enough to think awesome words or awesome images cannot be considered high-quality art simply because it was too easy to produce them, or because I used a GPU instead of a room full of underpaid, overworked concept artists working in the video game industry in Los Angeles.

They Are Coming for Me Too

I wonder if those who complain the loudest are simply the most surprised that their area of work is the one most recently automated. I’m a programmer, and AI tools are already coming for me. It’s honestly not something I saw coming two years ago, in 2021. I thought I had a decade or more. But I say: if a younger worker can use the tools to accomplish something at higher speed and higher quality than I can, then why do I get to name-call to preserve my paycheck? Who cares what I think? Maybe I deserve to be replaced.
