Reference

The Truth About Tonewood

One of the most annoying and stupid ideas endlessly discussed in the Electric Guitar Gear Enthusiast World (mostly dominated by men, or really young men, with enough time to get worked up over such things) is “tonewood”: the notion that electric guitars made of different kinds of wood sound perceptibly different when played through a guitar amp. This has become an ideology and a supposedly raging internet “debate” (there is no debate; one side repeats actual facts and asks for evidence, and the other repeats thought-stopping catchphrases). But no one lists all the facts in one place while still acknowledging a few valid nuances.

And the artists who work with instrument makers on signature models don’t want to upset their revenue stream, so they repeat the party line that tonewood is worth paying for, even when this can be easily demonstrated not to make sense, based on the instrument makers’ own claims and marketing (see the very last point especially)!

My thinking on tonewood has become ever-so-slightly more nuanced with a few key facts I learned, and some historical context.

  1. Historically, before electricity, tonewood was a real concern for instrument builders for centuries, and not just for guitars: oboes, clarinets, violins, any instrument made mostly of wood. This explains where the myth’s foundation in actual truth came from: two or three centuries ago, wooden instruments made their tones using only or mostly wood, not other means (electromagnetism).
  2. Of course tonewood choices affect the tone of 100% acoustic guitars.
  3. Choice of tonewood also definitely affects the weight of solid body guitars, which may affect the sustain, in the sense that a really crummy, too-light guitar might waste string energy vibrating the body instead of staying in place (unlike the ideal immovable heavy benches in Jim Lill’s video about the electric guitar he built using a workbench and an air gap).
  4. Wood choice probably affects the tone of fully hollow-body electric guitars with electromagnetic pickups, like early Gibson ES guitars. Wood choice likely also affects the tone of semi-hollow-body guitars with EM pickups, or at least the sustain of the notes. If not, then why are semi- or fully hollow bodies with EM pickups even sold?
  5. Some wound electromagnetic pickups were, and still are, non-potted, meaning they are not dipped in wax (potting dampens vibrations from the body), so they are somewhat microphonic. An example is a Gibson Les Paul with PAF pickups. Connect such a solid body guitar to an amp, turn it up, mute the strings with the left hand, and tap the body near the bridge with the right hand: you can hear a tap-tap sound coming out of the amp. So if the strings are not vibrating and the pickups still make a sound, then the tone of that tapping sound is affected by the wood and weight of the guitar. That all sounds plausible. However, it is not clear how much of this is covered up by the string signal when the strings are vibrating, or whether there is a measurable connection between body vibrations and string vibrations. I would believe it when people do a controlled experiment. But I am open to that possibility because it is plausible.

This much of the tonewood story is a real phenomenon.

However,

  1. Inasmuch as a set of pickups is not microphonic whatsoever (if that is possible), the wood itself is transparent to EM radiation and cannot affect the current in the wires going to the amp unless the solid wood affects the vibration of the strings at the bridge and nut (or fret, or glass or metal slide). This seems unlikely, as demonstrated by Jim Lill’s detailed video, especially considering the much larger effects of:
  2. Pick shape, size, and material vs. fingers touching the strings (the debate in classical nylon-string guitar circles about playing with the nails or with the flesh of the fingers is demonstrative); string material, string age, string gauge, scale length / tension; pickup placement along the string, pickup height from the string, pickup design and construction (which has a huge and measurable effect on the tone, not really in question by anyone); bridge design and materials; nut design and materials; fret materials and height and shape.
  3. However, a lot of these effects (besides pickups and scale length) pale in comparison to drive pedals, preamp design and tone stack, and especially the part that makes the sound and moves actual air: the cabinet, speakers, and microphone. These parts ship with frequency response curves because the effects on the signal can be easily measured. As Glenn Fricker points out, tonewood doesn’t ship with frequency response curves. Even if tonewood affects tone, it is such a small effect as to be completely buried by the other factors listed above. Unless you have a guitar that is hollow (or you have somewhat microphonic wound electromagnetic pickups, but only maybe and even then only slightly).
  4. There is no way in hell fretboard materials can affect tone on solid-body guitars because the fretboard does not even touch the strings. I would bet a lot of money that no blindfolded luthier can beat a deaf person (who can cheat and use their eyes) and tell apart by ear (more than chance) two guitars that have been constructed perfectly the same and only vary in fingerboard material. (Assuming the luthier cannot tell by touching the fingerboard while blindfolded or building in some touch-based method of identification.) No way in hell. (Especially under high gain.)
  5. I would concede that a luthier or high-quality manufacturer could claim to be unable to make, even from the same materials, two guitars so identical that they could be confused by ear. They may claim that variations between any two guitars are perceptible to the blind. In which case, I would claim that tonewood fretboards are even less likely to produce measurable tone differences: if individual guitars already vary that much with the same materials and manufacturing, variations attributed to fretboard materials are more likely explained by that kind of natural variation.
  6. I am willing to believe that for a fretless guitar or bass, the material of the fingerboard affects the tone, but not as much as other factors mentioned, or round-wound vs. flat-wound strings, where the fingers are placed away from the bridge, how hard the strings are struck, amplifier and cabinet, etc.
  7. On the whole, tonewood selection only matters in solid body guitars if you care about weight, aesthetics (assuming the body is not painted), or resale value. But not for the actual sound. Just buy different pickups; change strings, scale length, the number of frets (and hence pickup placement), pickup height, etc. If you are buying a painted-body guitar especially, choose the wood based on weight and price and ergonomics and comfort, not any other factor.
  8. Why are cheap solid-body guitars with more pieces in the body (3, 4) considered a negative compared to “nicer,” more expensive one- or two-piece solid bodies, while fancy 11- and 5- and 3-piece necks are considered a good thing, yet a one-piece neck is considered lower end? This makes no sense unless the neck materials have a < 1% effect on tone. If that is the case, then multiple pieces in the body also have a < 1% effect on tone. And the choice of fretboard wood probably has even less effect than the entire neck (since neither the neck nor the fretboard touches the strings), which is probably also a < 1% effect on tone.
  9. The difference in prices for fancy multi-piece necks is because of aesthetics and not tone. This paradox of “more body pieces bad” / “more neck pieces good” seems like a hokey, arbitrary tonewood myth, which is obvious guitar maker marketing shining through. Aesthetics, premium materials and higher labor costs for higher prices makes sense, but conflating wood aesthetics with tone when the effects cannot be measured (or rather, when the effects can be measured and are shown to be nonexistent or negligible) could be considered unethical.

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” (Upton Sinclair) Here, the men are luthiers. For centuries before electricity, luthiers were right that tonewood mattered. But they stopped asking whether it still mattered once the tone was no longer created acoustically but electromagnetically. Tonewood mostly doesn’t matter any more, at least not for the reasons they claim it does.


Design

Plague of Plagiarism

Matt Gemmell, esteemed independent fiction author and former technologist and pundit, offers a seemingly hardline stance against using these unprecedented new tools for serious creative work. He makes the point, which many of us already feel, that we are wary of claiming we “painted” or “wrote” something when we sort of only cobbled it together, or built it, or stumbled onto it, or tweaked our way toward it.

He has my number. When I produce artwork using these novel tools, I call it modding or salvaging and I call myself an “ainter” as in an “AI painter” and “I ain’t a painter.” I produce images or aintings as a hobby because it is enjoyable, and I don’t give a damn if I don’t have other serious grown-up approval. But I am on the same side as Matt on this probably because I never grew up in a world where these tools just simply always existed, and for obvious reasons I will never pretend I can produce an entire 9k digital painting—at the current level of quality and rapidity that I produce them—without the aid of AI tools. No one who knows me would believe me anyway, without evidence that I could do it laboriously by traditional means.

As an aside, I have actually developed my eye and my painting ability significantly while working with these tools, because sometimes traditional means of digital painting are just much more rapid ways of communicating what I want to the machine. And the machine screws things up a lot. A lot. Also, I am certainly not hiring models and working from life sketches or photographs to produce these images. But that doesn’t mean I cannot or have not done this. I think I have significant skills in this area (based on being at or near the front of the class in school in a life drawing class) and the assumption that someone who uses an AI art tool probably cannot draw is folly. (Mr. Gemmell does not claim this, but it seems sort of heavily implied by calling use of the tools “automated plagiarism.” It is nothing if not meant to belittle and warn away from developing skills at using these powerful new tools.)

I Know I Didn’t Produce the Entire Image Myself

But so what? Many artists cannot draw something from their mind’s eye with no reference, or at least we know that the results using reference are ten times better. Some artists can, but most cobble together reference into a new image as a matter of course. It’s how they work. They may do a web Image Search and carefully compose and remix, and it is not currently considered plagiarism, especially when the artist puts together enough pieces to create something “new.” But what are they doing when they cheat and draw pictures of the sea stacks of Iceland without ever having been to Iceland themselves? Is it really plagiarism to avail themselves of such a powerful reference tool as web Image Search? And why aren’t the hardliners speaking out against this borderline plagiarism? Is it because we have had this one particular tool for a few years and it is boring to talk about? Or is it because there is room for nuance in discussions about creativity, because web Image Search can still be used both for plagiarism and for novel creative work?

So now we live in an era when a machine can do the remixing very quickly for us, using billions of reference images, and can produce handfuls of infinitely varying, synthesized reference images for us, and we see one and say “that’s what I was looking for!” Now, when our eyes are filled with an image that no human has ever seen before, it is suddenly plagiarism? I think that makes no sense.

Every book is a quotation, and every house is a quotation out of all forests and mines and stone quarries, and every man is a quotation from all his ancestors. — Ralph Waldo Emerson.

Somehow human beings know there are billions of us on this earth, yet we still think we are special and that each creative work is some wholly new thing, when deep down every human knows they are not an island. Every thriller writer is not a plagiarist just because they didn’t invent an entire new genre, and their own work must be cobbled together from many other things, even if they can tell you their influences. Somehow humans keep forgetting what we have known for thousands of years: that “there is nothing new under the sun” (Ecclesiastes), that “everything is a remix” (Kirby Ferguson).

Gemmell’s hesitancy underlines my own wariness about claiming the entire final work. I have been stewing over this, sometimes wondering about all the really interesting work I have miraculously made seemingly much too easily, yet actually still laboriously (why am I pouring hours into this again?). But I don’t think his is an entirely accurate approach to the question.

No Old Answers to New Questions

AI art tools are genuinely new. And many people (students) will likely try to turn in work that was knocked off with no effort. But a teacher assigning a pointless essay to be written at home, instead of monitoring students in a proctored test environment, already had a problem: plagiarism is not new. Cheating and impersonation have been here longer than AI, and AI will not make this better. But teachers could invent new ways to get students to want to write, by picking topics that matter more to students. (Or perhaps there will be ways to use AI tools to generate unique tests for each student, so that cheating becomes impossible, with the difficulty level precisely calibrated by AI. Thus AI tools may be the solution to the AI tool problem.)

We know all this. But the handwringing and doom and gloom about new and exciting and scary tools overshadow any possible benefit, to some. Honestly, I am shocked that people are not more excited about the possibilities. I am shocked that otherwise technically capable people who love technology and consider themselves technophiles haven’t seen the creative ways the tools remix endless pieces of ideas in genuinely astonishing and creative and beautiful ways, instead relegating it to some parlor trick, some passing fad.

I would not have pegged Mr. Gemmell as a Luddite and I don’t think he is even entirely wrong. But I think many people are taking sides on the uses of these new AI art and AI writing tools without having seriously attempted to use them. Anyone who actually tries will quickly learn that the results tend to be unreliable at best. People who have spent months (probably no one has spent more than about 18 months using these tools seriously as of April 2023) getting good at this stuff may be good at making it look easy. But reliably finding and improving great images is not as automatic as many non-practitioners might think. My process for creating high-quality, high-resolution, poster-sized images is involved and convoluted, refined from months of experimentation. And the novel styles created by some AI art practitioners who plumb the tools’ depths lack any reference to any living human artist. These people are homesteading new frontiers.

I think the framing that most people who use these tools casually can only create new things by “automated plagiarism” is not quite right, but I would not hesitate to call it “automated fan art” or “automated remixing.” I think remixing enough influences (not just aping one angle), and finding and refining genuine gems coined from random input noise, is not actually as trivial as plagiarism. In other words, what makes the work into art is not just the effort, and it’s not just the result, and it’s not just luck. It is some combination of the three, no matter what the tools. But when a skilled practitioner is able to go beyond luck and reliably produce high-quality results with significant effort, who are we to say it cannot be called art? Why write yet another damn novel in a world with 129 million ISBNs issued in 50 years? Because writers can’t not write.

Indeed, the drive to make art is nearly universal. Better tools have always made this easier. I’m sure someone has told students, “it’s not real oil painting if you don’t crush your own pigments.” But many oil painters would call this pointless snobbery. It’s only slightly different from any other kind of gatekeeping of creative work based on just the tools. I would not advocate gatekeeping someone else’s art at all. But were I to allow it, my gatekeeping would be based on (a) results, (b) the urge to create, (c) the spark and joy of discovery, the muse, even the Fortunes and the Fates, the random noise, the contingency and unpredictability of it all, and (d) the effort, the polishing, the tweaking, the curating, the iteration, the nose to the grindstone. How is any such process not just good old-fashioned capital-A Art?

Art has always been about the grind, and then stepping back and sharing just the best. It’s about vision and purpose, about having something to say. It’s editing, it’s inspiration, it’s iteration. And then another round of selection. I think the curating of one’s own work by a prolific photographer (including polishing and tweaking what came out of that automated picture-making machine) is actually not that different from curating a few gems from hundreds of fantastic AI creation finds. I think the creative impulse and the keeping-at-it process using new AI tools is not that different from what a traditional artist could be spending their time doing. Those who proclaim otherwise seem to me to be projecting their ignorance, fear, and snobbery.

Working Through a Slow Human Team vs. Working Through a High-Speed Robot Team

Finally I also think that a simple rewording of his argument shows how it can fall apart. We will take the example of the late work of architect Frank Lloyd Wright. He worked through a team of students and understudies and is still credited with his later buildings (students may have received some credit; they executed all the drawings). But according to Mr. Gemmell’s narrow definition of authorship, and taking the structure of his argument:

When people invoke [a team of underlings] to generate something, they often still use the language of endeavour: here’s what I created, or made, or built with this [team]. Those words reveal the truth, as words invariably do.

To [direct a team] to generate something for you is not an act of creativity or engineering, because such acts are in the execution, not the idea. On the contrary, it’s automated plagiarism. It doesn’t matter that the originator is a piece of software instead of a person [or a team of people]; what matters is that it wasn’t you.

When the end result is built from the works of others, and when the building is also done by an agent other than oneself, there is no legitimate claim of authorship. This is elementary.

This is bullshit. Directors of films and art directors of large teams share credit, but they are not committing plagiarism. I know I am in danger of putting words in Mr. Gemmell’s mouth, but I think the structure of his argument is flawed, and it is clear that Frank Lloyd Wright, working through his students, can still be considered an author, an auteur, and not a plagiarist.

(“You can’t compare yourself to Frank Lloyd Wright.”)

You don’t have to consider an AI art tool to be an art department, but I don’t think there is anything structurally different about it besides that it’s made of metal and copper and silicon instead of meat and bones and neurons. I think the main arguments against using these novel tools are that they are too new, too scary, too fast, too cheap, and too good at remixing too much reference. They make creativity too easy, so they must be bad.

I disagree. I think poorly written essays or factual inaccuracies parroted by AI chatbots, or poorly realized AI paintings, should of course be considered a nuisance, or simply poor art. But I’m not naive enough to think awesome words or awesome images cannot be considered high-quality art simply because they were too easy to produce, or because I used a GPU instead of a room full of underpaid, overworked concept artists working in the game industry in Los Angeles.

They Are Coming for Me Too

I wonder if those who complain the loudest are simply the most surprised that their area of work is the one most recently automated. I’m a programmer, and AI tools are already coming for me. It’s honestly not something I saw coming two years ago, in 2021. I thought I had a decade or more. But I say, if a younger worker can use the tools to accomplish something at higher speed and higher quality than I can, then why do I get to name-call to preserve my paycheck? Who cares what I think? Maybe I deserve to be replaced.


Life

He Did It Once, He Could Do It Again

Or, “The Primary Was Stolen!”

Or, An open letter to Ron DeSantis.

Dear Actions,

GOP leadership has either stood by or cheered while Mr. Trump repeatedly lied that “The election was stolen!” Well, Mr. DeSantis: you didn’t have the spine to counter lies and stand up for democracy then. So if you win the GOP primary nomination in the near future, and our orange man-child former president throws a fit and predictably returns with the same lying refrain, “the primary was stolen!”, what will happen? You can desperately try to come back with facts, or even win dozens of lawsuits, and none of that will stop him or his supporters from repeating his lies. He will run against you on a third party with an awful authoritarian name like the Truth Party, and he will rip apart any chance of the GOP winning the 2024 election by dividing one side.

And you and Rupert Murdoch and all the little weaselly grand old party bootlickers will have deserved it, for letting an authoritarian coward push you around, for giving place to such an unabashed lying rapist PoS America-hating degenerate psychopath, but most especially for training an entire party of tens of millions of supporters to be riled up by endless fear and lies, and to care nothing for reality and facts. Well, I’m here to warn you that reality doesn’t work that way. Maybe in Putin’s shitty, backwards-ass Russia. But back here in reality, cause and effect are tied together.

And by the way, Mr. Trump will do this from his prison cell, if he has to. He won’t let up. Has he ever given any sign that he would ever let up, if it wasn’t to his own personal, immediate benefit? You fed the beast, you fed his tiny little fragile baby ego until it became a giant planet-sized ego, and that all-consuming beast will devour you.

Oh, and ha!

Sincerely, Consequences


Apple

Pair of iPhones Strapped to the Face

Update: June 6, 2023. Apple has announced Apple Vision Pro (hardware product, shipping 2024, $3,500) and visionOS, the platform, with video of the hardware and simulations of platform and feature previews. I feel like I got maybe 90% of this right, especially the emphasis on AR, the extensive use of 2-D windows, and even the use of the Oblong/Underkoffler term “spatial computing.” (Pats self on the back.) A lot of Apple rumor-mill folks and hardware leaks described the hardware product in correct detail, but with curiosity and even confusion about why it would be compelling from a platform point of view, from a software and functionality point of view. If I may indulge myself, I feel pretty proud that I guessed what Apple saw as the platform potential even though most pundits were intrigued but perplexed, and offered not much enlightenment, unless they are former Oblong Industries employees like me. In which case they all work at Apple anyway (I know of three or four of my former coworkers, most of whom are former Apple employees too). End of update.


(Original February 2023 prognostications:)

Apple’s new Reality OS hardware may become a reality in 2023, perhaps this quarter or early next quarter. With many credible rumors flying, a lot of smart people have given what I think are sort of superficial first-order thoughts about Apple’s entrance into the headset market. Apple did things differently with the iPhone and it changed the world and changed Apple. Will Apple just enter the headset space and do the same things as other players?

What is Possible?

It is important to understand just how difficult headset technology is, from a processing power, weight, specifications, and battery standpoint. It is all very non-trivial. Here is what we know about the current state of the art:

  • Apple has custom low-power silicon expertise that could power a portable headset device, probably outpacing the capabilities of any competitors. Apple can sell you a 14” laptop with 96 GB unified memory, which means more VRAM than most desktops and laptops!
  • Apple has camera expertise and entire teams that have built sophisticated pipelines. They are not new to this.
  • Apple has already shipped AR demos in their iOS and iPad devices, which have LiDAR sensors. They have solved some tracking (keeping an object in place) and even occlusion problems in existing software. However, when you are using your phone to change the POV, you can’t use your hands to interact. Apple needs hand tracking, and there is probably no reason Apple can’t do this well.
  • Apple has global map data for their Apple Maps product, which could be useful in software that interacts with the real world. Apple has shipped GPS for years now, which tells you where you are in the real world at a global scale, down to the building you are in.
  • Apple has beacon technology that can tell devices where objects are in the real world, at the room or building or meter-proximity scale. And near-field expertise to tell when devices touch things in the real world.
  • Apple has shipping technology that can track the face and expressions and deform a 3D avatar, in the form of the otherwise pointless Memoji initiative. This tech would be far from pointless in Reality OS.
  • Spatial computing has been around in research labs and in Sci-Fi for a long time (see the gestural interfaces John Underkoffler designed for Spielberg’s 2002 film Minority Report; Underkoffler later consulted on the Marvel Iron Man film and founded Oblong Industries, where I worked from 2010 to 2012). A lot of these gesture, multi-screen, and real-world computing ideas have had decades to stew, so this is not actually a totally new software world, even if the specialized hardware hasn’t yet stuck. There is good prior art here.
  • Apple has deep content plays with existing content and pipelines producing more (Apple TV+, Apple Arcade, Apple Fitness+) that could be compelling in a Reality OS world. They don’t have to wait for others to make compelling content for this space. They could make it themselves.
  • Apple FaceTime is a well-established video calling system that consumers trust and know how to use. Apple recently introduced SharePlay for remotely working with apps and data while sharing presence. It is half-baked but shows where they want to take things.
  • Apple is good at creating new UI paradigms by taking the best of existing (often external) tech demos and turning this into a coherent blending of hardware and software that proves useful and intuitive to people. You might even say that this is Apple’s sine qua non.

Reality Gambit, or Reality Gamut?

I agree with the many pundits who have no strong desire to strap a computer to their skulls unless the advantages are worth it.

In my view, the holy grail might need to be a device that can

  • Show the real world in high fidelity with no virtual pixels intruding. Bonus points if legacy systems are usable without taking the headset on and off. (More about this below, under Rectangular Proxies.)
  • Show overlays of useful information superimposed on top of the real world, such as floating marquees showing contextual information or directions (Augmented Reality). The user(s) need to be able to interact with this data, avatars, creatures, and whatnot, probably with hand gestures and voice (Siri).
  • Allow the option for a fully virtual experience that can completely block out the outside world for meetings, gaming, relaxation, movie watching, and future blending of these activities. (With outward-facing depth sensors that switch the software back to showing external cameras when danger is imminent, as current VR headsets do.)

I think if Apple can’t create a product and a platform that spans this entire spectrum, or Reality Gamut, of Mixed Reality = AR + VR, then any Apple headset product is dead in the water. We need more from Apple than just AR or just VR to justify a new software platform. But I think this full Reality Gamut could be enough to carry one new device category.

Spectacles

No way will Apple debut in 2023 with lightweight “glasses” that are transparent yet can do all of the above. Not with current technology. They know this is five to twenty years off.

Yet no way would Apple not understand the long term goal: to subsume all computing into one platform or paradigm that can link the old portable and desktop and large-screen paradigms (watchOS, iOS, iPadOS, macOS, tvOS) with a new pervasive computing paradigm, which would eventually require “magical” non-existent spectacles to be long-term viable for most people.

Yet, if Apple can launch an expensive XR (truly mixed gamut of normal reality to AR to fully immersive VR) headset in 2023—with some drawbacks like (a) weight, (b) battery and (c) price, but no compromises on tracking, interaction, lag, etc.—then they can start building the future now. In other words, developers could start writing software this summer, in 2023, that might run with a similar experience on a “magical” spectacle headset in the future, decades hence. I believe this will be their strategy. Most importantly, the interaction paradigm will not need to change even if the hardware changes to allow transparent views of the actual real world instead of cameras projecting the real world onto screens. If the platform is conceptualized and built right, up front, the same software might run on future more-advanced hardware.

Imagine what you could build with just that: a pair of iPhones strapped to your head. You would see out of the high-quality iPhone camera(s) with each eye looking at an expensive, high-dynamic-range, high-resolution “Retina” display or displays. The system could then superimpose pixels with UI and content experiences. Existing LiDAR sensors and AR software can run occlusion so things appear in the real world where they ought to be, especially behind objects. And you could opt into a fully immersive environment for VR if your surroundings were safe, like the “I’m not driving” button in iOS. And mixed experiences, like turning your floor into lava, would make for pointless but powerful demos that would at least get the point across.

For some reason I have not seen tech journalists and pundits frame the headset Gordian Knot the following way (perhaps their jobs are to critique existing things instead of figuring out how to shift the future using logic and creativity?):

  • Apple only needs to solve the software platform paradigm problem up front, while purposely embracing hardware tradeoffs (a), (b), and (c) (above), and commit to making gradual headway on the hardware going forward, while building the app and content experiences that will become increasingly accessible as (a) weight decreases, (b) battery-life improves, and (c) prices drop.

Apple did exactly this with the watch, the phone, the tablet. The original iPhone is terrible by today’s standards, but it got the software off the ground. That software still runs in the same paradigm, sixteen years later. The original iPad is terrible by today’s standards, but that iPad was leaps and bounds better than anything on the market at the time (and modern iPads are still way better than anything on the market). The original Apple Watch barely did what it needed to do, but it allowed the Apple Watch to start off as a platform. The current watches for sale are almost exactly the same, in terms of software paradigm, as that original Apple Watch, just much better in terms of form factor, battery life, etc.

Rectangular Proxies

I will speculate that Apple could bring legacy software into the Reality OS world by allowing virtual devices of various sizes, from wearable (wrist) to pocketable (phone-sized) to holdable (tablet-sized) to ergonomic desk-like, to movie-theater or fully immersive. My working idea is that they could ship “foam block rounded-rect proxies” with sexy fiducial markers (Apple has done something similar with the experience for transferring iPhone data to set up a new device, which is way, way sexier than it needs to be). These holdable objects could be interacted with by projecting pixels of virtual screens running legacy apps onto the “devices” as seen from the headset, which would function very similarly to current devices, hopefully including touch. I am not sure about the details, but Apple brought iPhone and iPad software to the Mac once it ran Apple Silicon, so they understand the value of bringing software from one platform to another. They could bring 2D computing into Reality OS out of the gate in 2023.

Conclusion

I recognize that there are significant challenges to creating a new immersive computing paradigm. I think Apple has a good shot at this, if they take it seriously and have thought deeply about it from a user- and a design- and a “how-things-should-work” perspective. They have done it before, with the Apple II, Mac, iPod, iPhone, iPad, and Apple Watch. I am pretty sure that this new category will not be initially as popular as these six past categories, but I believe eventually it could be more popular. Immersive computing could be the future. I think it is just a matter of when and how.

(And I would rather Apple define that future than Meta—which is just Facebook, a company designed entirely to sell ads through rage-inducing “engagement.”)
