Highlights

  • Last year, I wrote about so-called AI and how it’s not the threat to knowledge work that we may think: My thesis was rooted in the idea that knowledge work’s worst threat is itself, the proliferation of David Graeber’s “bullshit jobs” that have the aesthetics of knowledge without the content.
  • This lack of content—the content that creates impact, not the content that generative AI can mimic at scale—is what drives the angst of the knowledge worker, so few of whom create anything from what they know, let alone learn new things that can be applied in new ways. The widgetization of knowledge work is the threat, and the perceived ability of AI to replace knowledge workers is an admonishment of the pseudo-productivity these roles reward. As I wrote last year, “AI can replace your job because your job right now is meaningless.”
  • When I read this essay now, over a year later, I am struck by how true it remains. The swift destruction of labor as we know it has failed to emerge. A new GPT model release feels like a new iPhone: more of the same. And as we continue to discover, much of what AI impressively does is actually the work of humans.
  • The AI hype cycle has been well-documented, but that it remains a fairly niche product is curious to me. My friends are a fairly specific bunch, to be fair, but I know only two who use generative AI tools regularly, and one of those is required to for work. My sole use case remains automated transcriptions, though during my job search I did try using AI tools, as was endlessly suggested to me, and found them entirely useless.
  • see my time being pushed to cognitively demanding tasks, strategic analysis, and human connection: the very things AI cannot do. There is nothing that I do for clients or myself that can be replaced by AI, because my work and my time have content. I do not move around information, but apply knowledge in novel settings with discernment.
  • AI does not threaten me, but I know it will continue to be used to threaten workers. It is a tool of control, and as such is the ideal load-bearing pillar of an economy that is increasingly psychological rather than financial, ruled by a fantasy of rationality that privileges its believers. AI is a tool of this time, something we can project on, believe in, hope for even as it fails to do the bare minimum of what has been promised.
  • My formative years were spent at a school founded by a guy who wrote this about the purpose of education: “Goodness without knowledge is weak and feeble, yet knowledge without goodness is dangerous, and that both united form the noblest character, and lay the surest foundation of usefulness to mankind.”
  • And there’s the rub: knowledge, insofar as it continues to exist as something separate from information, is not inherently good, and the application of it is a moral activity. Many workers are neutered then, unable to develop knowledge or goodness, restricted in their movements by the confines of a technocratic system developed with a religious belief in rationality that leaves no room for personal moral or intellectual development. It is a system that so fears weakness that it traps goodness in spreadsheet calculations and access to a simulacrum of knowledge in language models, coldly removing the human element required for the development of a noble character.
  • AI cannot be good. It cannot wield knowledge with experience, and it cannot discern the ethics of application because it cannot think, and thus cannot truly be useful. Any utility it achieves will be due to the humans who leverage it, and as with any tool, the goodness of the humans will dictate the outcomes. But what kind of humans will be left if what we learn and what we make are funneled through the very tool we are meant to control?