Palantir is hot now. The company recently joined the S&P 500. The stock is on a tear, and the company is nearing a $100bn market cap. VCs chase ex-Palantir founders asking to invest.
For long-time employees and alumni of the company, this feels deeply weird. During the 2016-2020 era especially, telling people you worked at Palantir was unpopular. The company was seen as spy tech, NSA surveillance, or worse. There were regular protests outside the office. Even among people who didn’t have a problem with it morally, the company was dismissed as a consulting company masquerading as software, or, at best, a sophisticated form of talent arbitrage.
First, I wanted to work in ‘difficult’ industries on real, meaningful problems. My area of interest – for personal reasons – was healthcare and bio, which the company had a nascent presence in. The company was talking about working in healthcare, aerospace, manufacturing, cybersecurity, and other industries that I felt were very important but that most people were not, at the time, working on. Back then the hot things were social networks (Facebook, LinkedIn, Quora, etc.) and other miscellaneous consumer apps (Dropbox, Uber, Airbnb), but very few companies were tackling what felt like the real, thorny parts of the economy. If you wanted to work on these ‘harder’ areas of the economy but also wanted a Silicon Valley work culture, Palantir was basically your only option for a while.
Second, talent density. I talked to some of the early people who started the healthcare vertical (Nick Perry, Lekan Wang, and Andrew Girvin) and was extremely impressed. I then interviewed with a bunch of the early business operations and strategy folks and came away even more impressed. These were seriously intense, competitive people who wanted to win, true believers; weird, fascinating people who read philosophy in their spare time, went on weird diets, and did 100-mile bike rides for fun. This, it turned out, was an inheritance from the PayPal mafia. Yishan Wong, who was early at PayPal, wrote about the importance of intensity:
“In general, as I begin to survey more startups, I find that the talent level at PayPal is not uncommon for a Silicon Valley startup, but the differentiating factor may have been the level of intensity from the top: both Peter Thiel and Max Levchin were extremely intense people - hyper-competitive, hard-working, and unwilling to accept defeat. I think this sort of leadership is what pushes the “standard” talented team to be able to do great things and, subsequently, contributes to producing a wellspring of later achievements.”
Palantir was an unusually weird place, too. I remember the first time I talked to Stephen Cohen: he had the A/C in his office set at 60, several weird-looking devices for minimizing the CO2 content in the room, and a giant pile of ice in a cup. Throughout the conversation, he kept chewing pieces of ice. (Apparently there are cognitive benefits to this.)
I like to meet candidates with no data about them: no résumé, no preliminary discussions or job description, just the candidate and me in a room. I ask a fairly random question, one that is orthogonal to anything they would be doing at Palantir. I then watch how they disaggregate the question, if they appreciate how many different ways there are to see the same thing. I like to keep interviews short, about 10 minutes. Otherwise, people move into their learned responses and you don’t get a sense of who they really are.
My interviews were often not about work or software at all – in one of my interviews we just spent an hour talking about Wittgenstein. Note that both Peter Thiel and Alex Karp were philosophy grads. Thiel’s lecture notes had come out not long before, and they discussed Shakespeare, Tolstoy, Girard (then unknown, now a cliché), and more.
When I joined, Palantir was divided up into two types of engineers:
Engineers who work with customers, sometimes known as FDEs: forward-deployed engineers.
Engineers who work on the core product team (product development – PD), who rarely visit customers.
FDEs were typically expected to ‘go onsite’ to the customer’s offices and work from there 3-4 days per week, which meant a ton of travel. This is, and was, highly unusual for a Silicon Valley company.
There’s a lot to unpack about this model, but the key idea is that you gain intricate knowledge of business processes in difficult industries (manufacturing, healthcare, intel, aerospace, etc.) and then use that knowledge to design software that actually solves the problem. The PD engineers then ‘productize’ what the FDEs build, and – more generally – build software that provides leverage for the FDEs to do their work better and faster. [2]
This is how much of the Foundry product took initial shape: FDEs went to customer sites, had to do a bunch of cruft work manually, and PD engineers built tools that automated the cruft work. Need to bring in data from SAP or AWS? Here’s Magritte (a data ingestion tool). Need to visualize data? Here’s Contour (a point-and-click visualization tool). Need to spin up a quick web app? Here’s Workshop (a Retool-like UI for making webapps). Eventually, you had a damn good set of tools clustered around the loose theme of ‘integrate data and make it useful somehow’.
At the time, it was seen as a radical step to give customers access to these tools — they weren’t in a state for that — but now this drives 50%+ of the company’s revenue, and it’s called Foundry. Viewed this way, Palantir pulled off a rare services company → product company pivot: in 2016, descriptions of it as a Silicon Valley services company were not totally off the mark, but in 2024 they are deeply off the mark, because the company successfully built an enterprise data platform using the lessons from those early years, and it shows in the gross margins - 80% gross margins in 2023. These are software margins. Compare to Accenture: 32%.
Tyler Cowen has a wonderful saying, ‘context is that which is scarce’, and you could say it’s the foundational insight of this model. Going onsite to your customers – the startup guru Steve Blank calls this “getting out of the building” – means you capture the tacit knowledge of how they work, not just the flattened ‘list of requirements’ model that enterprise software typically relies on. The company believed this to a hilarious degree: it was routine to get a call from someone and have to book a first-thing-next-morning flight to somewhere extremely random; “get on a plane first, ask questions later” was the cultural bias. This resulted in out-of-control travel spend for a long time — many of us ended up getting United 1K or similar — but it also meant an intense decade-long learning cycle which eventually paid off.
Airbus’s CEO told us his biggest problem was scaling up A350 manufacturing, so we ended up building software to directly tackle that problem. I sometimes describe it as “Asana, but for building planes”. We took disparate sources of data — work orders, missing parts, quality issues (“non-conformities”) — and put them in a nice interface, with the ability to check off work and see what other teams are doing, where the parts are, what the schedule is, and so on. We also gave users the ability to search (including fuzzy/semantic search) previous quality issues and see how they were addressed. These are all sort of basic software things, but you’ve seen how crappy enterprise software can be - just deploying these ‘best practice’ UIs to the real world is insanely powerful. This ended up helping to drive the A350 manufacturing surge, successfully 4x’ing the pace of manufacturing while keeping Airbus’s high standards of quality.
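To make the “search previous quality issues” piece concrete, here’s a minimal sketch of the idea in Python. The record shape and the matching approach are illustrative assumptions on my part (a crude difflib ratio rather than a real search index or embeddings), not what was actually deployed:

```python
# Sketch: rank past non-conformities by rough textual similarity to a new one.
# The NonConformity fields and the sample data are invented for illustration.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class NonConformity:
    work_order: str
    description: str
    resolution: str

PAST_ISSUES = [
    NonConformity("WO-1041", "hairline crack near fastener hole on rib 12",
                  "re-drilled and installed oversize fastener"),
    NonConformity("WO-2203", "paint blistering on fuselage panel after cure",
                  "stripped and re-applied primer per spec"),
]

def find_similar(query: str, issues: list[NonConformity], top_k: int = 3) -> list[NonConformity]:
    """Return the past issues whose descriptions most resemble the query text."""
    scored = [(SequenceMatcher(None, query.lower(), i.description.lower()).ratio(), i)
              for i in issues]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [issue for _, issue in scored[:top_k]]

for match in find_similar("small crack at fastener hole", PAST_ISSUES):
    print(match.work_order, "->", match.resolution)
```

A real deployment would sit on top of the integrated data described below rather than an in-memory list, but the shape of the feature - free-text query in, ranked prior resolutions out - is the same.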
This made the software hard to describe concisely - it wasn’t just a database or a spreadsheet, it was an end-to-end solution to that specific problem, and to hell with generalizability. Your job was to solve the problem, and not worry about overfitting; PD’s job was to take whatever you’d built and generalize it, with the goal of selling it elsewhere.
FDEs tend to write code that gets the job done fast, which usually means – politely – technical debt and hacky workarounds. PD engineers write software that scales cleanly, works for multiple use cases, and doesn’t break. One of the key ‘secrets’ of the company is that generating deep, sustaining enterprise value requires both. FDEs tend to have high pain tolerance, the social and political skills needed to embed yourself deep in a foreign company and gain customer trust, and high velocity – you need to build something that delivers a kernel of value fast so that customers realize you’re the real deal. It helped that customers had hilariously low expectations of most software contractors, who were typically implementers of SAP or other software like that, and worked on years-long ‘waterfall’ style timescales. So when a ragtag team of 20-something kids showed up to the customer site and built real software that people could use within a week or two, people noticed.
This two-pronged model made for a powerful engine. Customer teams were often small (4-5 people) and operated fast and autonomously; there were many of them, all learning fast, and the core product team’s job was to take those learnings and build the main platform.
The world needs more companies like SpaceX, and Palantir, that differentiate on execution - achieving the outcome - not on playing political games or building narrow point solutions that don’t hit the goal.
Another key thing FDEs did was data integration, a term that puts most people to sleep. This was (and still is) the core of what the company does, and its importance was underrated by most observers for years. In fact, it’s only now with the advent of AI that people are starting to realize the importance of having clean, curated, easy-to-access data for the enterprise. (See: the ‘it’ in AI models is the dataset).
In simple terms, ‘data integration’ means (a) gaining access to enterprise data, which usually means negotiating with ‘data owners’ in an organization, (b) cleaning it and sometimes transforming it so that it’s usable, and (c) putting it somewhere everyone can access it. Much of the base, foundational software in Palantir’s main software platform (Foundry) is just tooling to make this task easier and faster.
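As a rough illustration of those three steps, here’s a minimal sketch in Python. The file path, column names, and table name are all made up for the example; a real pipeline also has to handle credentials, schema drift, PDFs, and the organizational politics described below:

```python
# Sketch of the three data-integration steps: access, clean/transform, publish.
import sqlite3
import pandas as pd

# (a) gain access to the raw enterprise data (here: a hypothetical Excel export
#     handed over by a "data owner" after some negotiation)
raw = pd.read_excel("exports/finance_q3.xlsx")

# (b) clean and transform it so it's actually usable
clean = (
    raw.rename(columns=lambda c: c.strip().lower().replace(" ", "_"))
       .dropna(subset=["invoice_id"])
       .assign(amount=lambda df: pd.to_numeric(df["amount"], errors="coerce"))
)

# (c) put it somewhere everyone can access it (here: a shared SQLite database)
with sqlite3.connect("shared/warehouse.db") as conn:
    clean.to_sql("finance_invoices", conn, if_exists="replace", index=False)
```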
Why is data integration so hard? The data is often in different formats that aren’t easily analyzed by computers – PDFs, notebooks, Excel files (my god, so many Excel files) and so on. But often what really gets in the way is organizational politics: a team or group controls a key data source, and it justifies its existence in the corporation by being the gatekeeper of that data source (and, often, by providing analyses of that data). [3] These politics can be a formidable obstacle to overcome, and in some cases led to hilarious outcomes – you’d have a company buying an 8-12 week pilot, and we’d spend all 8-12 weeks just getting data access, and the final week scrambling to have something to demo.
The other ‘secret’ Palantir figured out early is that data access tussles were partly about genuine data security concerns, and could be alleviated by building security controls into the data integration layer of the platform - at all levels. This meant role-based access controls, row-level policies, security markings, audit trails, and a ton of other data security features that other companies are still catching up to. Because of these features, implementing Palantir often made companies’ data more secure, not less. [4]
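To give a flavor of what “row-level policies plus audit trails” can look like, here’s a small illustrative sketch; the roles, markings, and record shapes are invented for the example and are not Palantir’s actual security model:

```python
# Sketch: filter rows by a per-row security marking and log every access attempt.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class User:
    name: str
    roles: set[str]

@dataclass
class Row:
    data: dict
    required_role: str  # the security marking on this particular row

audit_log: list[dict] = []

def read_rows(user: User, rows: list[Row]) -> list[dict]:
    """Return only the rows the user's roles allow, recording the access."""
    visible = [r.data for r in rows if r.required_role in user.roles]
    audit_log.append({
        "user": user.name,
        "time": datetime.now(timezone.utc).isoformat(),
        "rows_requested": len(rows),
        "rows_returned": len(visible),
    })
    return visible

rows = [
    Row({"patient": "A", "note": "routine follow-up"}, required_role="clinician"),
    Row({"patient": "B", "note": "restricted trial data"}, required_role="oncology"),
]
print(read_rows(User("dana", {"clinician"}), rows))  # only the first row is returned
```

In a real platform these checks live in the data layer itself rather than in application code, which is what lets the same controls apply to every tool built on top of it.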
The overall ‘vibe’ of the company was more messianic cult than normal software company. But importantly, criticism seemed to be highly tolerated, even welcomed – one person showed me an email chain in which an entry-level software engineer was having an open, contentious argument with a Director of the company, with the entire company (around a thousand people) cc’d. As a rationalist-brained philosophy graduate, this particular point was deeply important to me – I wasn’t interested in joining an uncritical cult. But a cult of skeptical people who cared deeply and wanted to argue about where the world was going and how software fit into it – existentially – that was interesting to me. [5]
I’m not sure if they still do this, but at the time when you joined they sent you a copy of Impro, The Looming Tower (9/11 book), Interviewing Users, and Getting Things Done. I also got an early PDF version of what became Ray Dalio’s Principles. This set the tone. The Looming Tower was obvious enough – the company was founded partly as a response to 9/11 and what Peter felt were the inevitable violations of civil liberties that would follow, and the context was valuable. But why Impro?
Being a successful FDE required an unusual sensitivity to social context – what you really had to do was partner with your corporate (or government) counterparts at the highest level and gain their trust, which often required playing political games. Impro is popular with nerds partly because it breaks down social behavior mechanistically. The vocabulary of the company was saturated with Impro-isms – ‘casting’ is an example. Johnstone discusses how the same actor can play ‘high status’ or ‘low status’ just by changing parts of their physical behavior – for example, keeping your head still while talking is high status, whereas moving your head side to side a lot is low status. Standing tall with your hands showing is high status, slouching with your hands in your pocket is low status. And so on. If you didn’t know all this, you were unlikely to succeed in a customer environment. Which meant you were unlikely to integrate customer data or get people to use your software. Which meant failure.
This is one reason why former FDEs tend to be great founders. (There are usually more ex-Palantir founders than there are ex-Googlers in each YC batch, despite there being ~50x more Google employees.) Good founders have an instinct for reading rooms, group dynamics, and power. This isn’t usually talked about, but it’s critical: founding a successful company is about taking part in negotiation after negotiation after negotiation, and winning (on net). Hiring, sales, fundraising are all negotiations at their core. It’s hard to be great at negotiating without having these instincts for human behavior. This is something Palantir teaches FDEs, and is hard to learn at other Valley companies.
Another is that FDEs have to be good at understanding things. Your effectiveness directly correlates to how quickly you can learn to speak the customer’s language and really drill down into how their business works. If you’re working with hospitals, you quickly learn to talk about capacity management and patient throughput vs. just saying “help you improve your healthcare”. Same with drug discovery, health insurance, informatics, cancer immunotherapy, and so on; all have specialized vocabularies, and the people who do well tend to be great at learning them fast.
One of my favorite insights from Tyler Cowen’s book ‘Talent’ is that the most talented people tend to develop their own vocabularies and memes, and these serve as entry points to a whole intellectual world constructed by that person. Tyler himself is of course a great example of this. Any MR reader can name 10+ Tylerisms instantly - ‘model this’, ‘context is that which is scarce’, ‘solve for the equilibrium’, ‘the great stagnation’ are all examples. You can find others who are great at this. Thiel is one. Elon is another (“multiplanetary species”, “preserving the light of consciousness”, etc. are all memes). Trump, Yudkowsky, gwern, SSC, Paul Graham, all of them regularly coin memes. It turns out that this is a good proxy for impact.
This insight goes for companies, too, and Palantir had its own, vast set of terms, some of which are obscure enough that “what does Palantir actually do?” became a meme online. ‘Ontology’ is an old one, but then there is ‘impl’, ‘artist’s colony’, ‘compounding’, ‘the 36 chambers’, ‘dots’, ‘metabolizing pain’, ‘gamma radiation’, and so on. The point isn’t to explain all of these terms, each of which compresses a whole set of rich insights; it’s that when you’re looking for companies to join, you could do worse than look for a rich internal language or vocabulary that helps you think about things in a more interesting way.
When Palantir’s name comes up, most people think of Peter Thiel. But many of these terms came from early employees, especially Shyam Sankar, who’s now the President of the company. Still, Peter is deeply influential in the company culture, even though he wasn’t operationally involved with the company at all during the time I was there. This document, written by Joe Lonsdale, was previously an internal document but was made public at some point, and it gives a flavor of the company’s cultural principles.
One of the things that (I think) came from Peter was the idea of not giving people titles. When I was there, everyone had the “forward deployed engineer” title, more or less, and apart from that there were five or six Directors and the CEO. Occasionally someone would make up a different title (one guy I know called himself “Head of Special Situations”, which I thought was hilarious) but these never really caught on. It’s straightforward to trace this back to Peter’s Girardian beliefs: if you create titles, people start coveting them, and this ends up creating competitive politics inside the company that undermines internal unity. Better to just give everyone the same title and make them go focus on the goal instead.
There are plenty of good critiques of the ‘flat hierarchy’ stance – The Tyranny of Structurelessness is a great one – and it largely seems to have fallen out of fashion in modern startups, where you quickly get CEO, COO, VPs, Founding Engineers, and so on. But my experience is that it worked well at Palantir. Some people were more influential than others, but the influence was usually based on some impressive accomplishment, and most importantly nobody could tell anyone else what to do. So it didn’t matter if somebody influential thought your idea was dumb: you could ignore them and go build something if you thought it was the right thing to do. On top of that, the culture valorized such people: stories were told of some engineer ignoring a Director and building something that ended up being a critical piece of infrastructure, and this was held up as an example to imitate.
The cost of this was that the company often felt like it had no clear strategy or direction – more like a Petri dish of smart people building little fiefdoms and going off in random directions. But it was incredibly generative. It’s underrated just how many novel UI concepts and ideas came out of that company. Only some of these now have non-Palantir equivalents: Hex, Retool, and Airflow, for example, all have components that were first developed at Palantir. The company’s doing the same for AI now – the tooling for deploying LLMs at large enterprises is powerful.
The ‘no titles’ thing also meant that people came in and out of fashion very quickly inside the company. Because everyone had the same title, you had to gauge influence through other means, and those were things like “who seems really tight with this Director right now” or “who is leading this product initiative which seems important”, not “this person is the VP of so-and-so”. The result was a sort of hero-shithead rollercoaster at scale – somebody would be very influential for a while, then mysteriously disappear and not be working on anything visible for months, and you wouldn’t ever be totally sure what happened.
Another thing I can trace back to Peter is the idea of talent bat-signals. Having started my own company now (in stealth for the moment), I appreciate this a lot more: recruiting good people is hard, and you need a differentiated source of talent. If you’re just competing against Facebook/Google for the same set of Stanford CS grads every year, you’re going to lose. That means you need (a) a pool of talent that is interested in joining you in particular, over other companies, and (b) a way of reaching them at scale. Palantir had several differentiated sources of recruiting alpha.
First, there were all the people who were pro defense/intelligence work back when that wasn’t fashionable, which selected for, e.g., smart engineers from the Midwest or red states more than usual, and also plenty of smart ex-army, ex-CIA/NSA types who wanted to serve the USA but also saw the appeal in working for a Silicon Valley company. My first day at the company, I was at my team’s internal onboarding with another guy, who looked a bit older than me. I asked him what he’d done before Palantir. With a deadpan expression, he looked me in the eye and said “I worked at the agency for 15 years”. I was then introduced to my first lead, who was a former SWAT cop in Ohio (!) and an Army vet.
There were lots of these people, many extremely talented, and they mostly weren’t joining Google. Palantir was the only real ‘beacon’ for these types, and the company was loud about supporting the military, being patriotic, and so on, when that was deeply unfashionable. That set up a highly effective, unique bat-signal. (Now there’s Anduril, and a plethora of defense and manufacturing startups.) [6]
Second, you had to be weird to want to join the company, at least after the initial hype wave died down (and especially during the Trump years, when the company was a pariah). Partly this was the aggressive ‘mission focus’ type branding back when this was uncommon, but also the company was loud about the fact that people worked long hours, were paid lower than market, and had to travel a lot. Meanwhile, we were being kicked out of Silicon Valley job fairs for working with the government. All of this selected for a certain type of person: somebody who can think for themselves, and doesn’t over-index on a bad news story.
The morality question is a fascinating one. The company is unabashedly pro-West, a stance I mostly agree with – a world more CCP-aligned or Russia-aligned seems like a bad one to me, and that’s the choice that’s on the table. [7] It’s easy to critique free countries when you live in one, harder when you’ve experienced the alternative (as I have - I spent a few childhood years in a repressive country). So I had no problem with the company helping the military, even when I disagreed with some of the things the military was doing.
But doesn’t the military sometimes do bad things? Of course - I was opposed to the Iraq war. This gets to the crux of the matter: working at the company was neither 100% morally good — because sometimes we’d be helping agencies that had goals I’d disagree with — nor 100% bad: the government does a lot of good things, and helping it do them more efficiently by providing software that doesn’t suck is a noble thing. One way of clarifying this is to break down the company’s work into three buckets – these categories aren’t perfect, but bear with me:
1. Morally neutral. Normal corporate work, e.g. FedEx, CVS, finance companies, tech companies, and so on. Some people might have a problem with it, but on the whole people feel fine about these things.
2. Unambiguously good. For example, anti-pandemic response with the CDC; anti-child pornography work with NCMEC; and so on. Most people would agree these are good things to work on.
3. Grey areas. By this I mean ‘involve morally thorny, difficult decisions’: examples include health insurance, immigration enforcement, oil companies, the military, spy agencies, police/crime, and so on.
Every engineer faces a choice: you can work on things like Google search or the Facebook news feed, all of which seem like marginally good things and basically fall into category 1. You can also go work on category 2 things like GiveDirectly or OpenPhilanthropy or whatever.
The critical case against Palantir seemed to be something like “you shouldn’t work on category 3 things, because sometimes this involves making morally bad decisions”. An example was immigration enforcement during 2016-2020, aspects of which many people were uncomfortable with.
But it seems to me that ignoring category 3 entirely, and just disengaging from it, is also an abdication of responsibility. Institutions in category 3 need to exist. The USA is defended by people with guns. The police have to enforce the law, and - in my experience - even people who are morally uncomfortable with some aspects of policing are quick to call the police if their own home has been robbed. Oil companies have to provide energy. Health insurers have to make difficult decisions all the time. Yes, there are unsavory aspects to all of these things. But do we just disengage from all of these institutions entirely, and let them sort themselves out?
As with Palantir, working on AI probably isn’t 100% morally good, nor is it 100% evil. Not engaging with it – or calling for a pause/stop, which is a fantasy – is unlikely to be the best stance. Even if you don’t work at OpenAI or Anthropic, if you’re someone who could plausibly work on AI-related issues, you probably want to do so in some way. There are easy cases: build evals, work on alignment, work on societal resilience. But my claim here is that the grey area is worth engaging in too: work on government AI policy. Deploy AI into areas like healthcare. Sure, it’ll be difficult. Plunge in. [8]
When I think about the most influential people in AI today, they are almost all people in the room - whether at an AI lab, in government, or at an influential think tank. I’d rather be one of those than one of the pontificators. Sure, it’ll involve difficult decisions. But it’s better to be in the room when things happen, even if you later have to leave and sound the alarm.
Am I bullish on the company still? The big productivity gains of this AI cycle are going to come when AI starts providing leverage to the large companies and businesses of this era - in industries like manufacturing, defense, logistics, healthcare and more. Palantir has spent a decade working with these companies. AI agents will eventually drive many core business workflows, and these agents will rely on read/write access to critical business data. Spending a decade integrating enterprise data is the critical foundation for deploying AI to the enterprise. The opportunity is massive.