This week, my readings span a diverse set of themes, from the flourishing Spanish real estate market to deep dives into AI developments. The Bank of Spain’s analysis of the Encuesta Financiera de las Familias (EFF) raises concerns about generational wealth disparities, largely rooted in real estate ownership. In the realm of AI, an enlightening interview with the founder of DeepSeek provides insights into China’s push to lead in innovation rather than follow. I also explored reasoning models through Sebastian Raschka’s illuminating article, and I delved into the limitations of AI agents, with some intriguing solutions outlined in Anthropic’s guide on building effective agents.

In the world of data, I’ve dug into the enduring utility of pivot tables as a vital tool for data analysis, even as they extend beyond Excel into modern data toolkits. Alongside OpenAI’s new reasoning models and its “Deep Research” mode announcement, there’s a troubling shift as Google retreats from its stance on avoiding AI’s use in weaponry.

Real estate

  • El Mercado Inmobiliario Español, en Máximos: Récord De Precios Con Muy Poca Oferta Y Mucha Demanda: The Spanish real estate market reached record highs in 2024, with housing prices rising sharply amid limited supply and strong demand. In December, sale prices peaked at 2,271 €/m², while rental prices exceeded 13 €/m² for the first time. Economic growth and falling unemployment spurred housing demand, especially in desirable coastal areas. Despite an increase in housing transactions and new mortgages, the housing shortage persists, pushing many towards renting. The rental market faces high demand against limited supply, with rents up 11.5% over the year. The situation is exacerbated by a growing trend of seasonal rentals and insufficient new housing construction.

Data Science

  • Why Pivot Tables Never Die: “Why Pivot Tables Never Die” by Simon Späti highlights the enduring relevance of pivot tables in business analytics, even amidst modern data tools and AI advancements. Pivot tables, a feature of spreadsheets like Excel, aggregate and summarize complex data swiftly and without code, making them accessible and indispensable for data exploration. Their simplicity rests on four core components—rows, columns, filters, and values—which let users dynamically “slice and dice” data for real-time insights (a small pandas sketch of this mapping follows this list). Pivot tables standardize around dimensions and measures, enhancing accessibility for non-technical users. Despite the rise of sophisticated BI tools and AI, pivot tables maintain their status through standardization and ease of use, offering a familiar interface for data interpretation, especially with fast OLAP backends. As data complexity grows, pivot tables integrated into BI tools simplify data handling and advance self-service analytics, potentially evolving into a key BI and AI interface.
  • Bit Prediction: In “Bit Prediction,” Ben Recht explores the fundamental challenge of predicting bits from a sequence of 0s and 1s, a prototypical problem for statistical prediction to which complex classification tasks can be reduced. Evaluating a method amounts to measuring the deviation between its predictions and the actual outcomes. Recht then moves from hard binary predictions to probabilities between 0 and 1 that express confidence levels. He presents three models of the data: i.i.d., exchangeable, and arbitrary sequences, which treat the bits as either random or predetermined. All three yield similar prediction strategies and recast prediction as a missing-data problem, in which future data points are unknowns fixed in advance by a process akin to fate. The models unify under the principle that predicting the rate observed within a reference class is a reasonable strategy (a small numerical sketch follows this list).
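
A minimal pandas sketch of the mapping Späti describes, purely for illustration: the toy DataFrame and its column names are made up, but they show how filters, rows, columns, and values line up with the arguments of pivot_table.

```python
import pandas as pd

# Toy transaction data (hypothetical columns, for illustration only).
df = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q1", "Q2"],
    "product": ["A", "A", "A", "B", "B"],
    "revenue": [100, 120, 90, 60, 80],
})

# Filter -> boolean mask, rows/columns -> dimensions, values -> the measure.
pivot = pd.pivot_table(
    df[df["product"] == "A"],   # filter
    index="region",             # rows (dimension)
    columns="quarter",          # columns (dimension)
    values="revenue",           # values (measure)
    aggfunc="sum",              # aggregation applied to the measure
)
print(pivot)
```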
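
And a small numerical sketch for the bit-prediction piece, under my own reading of the “rate within a reference class” idea: predict each bit as the running frequency of 1s seen so far, and score the probabilistic predictions with squared error. The simulated sequence and the scoring choice are assumptions for illustration, not Recht’s exact setup.

```python
import random

def rate_predictor(bits):
    """Predict each bit as the running frequency of 1s observed so far."""
    ones, preds = 0, []
    for t, b in enumerate(bits):
        preds.append(ones / t if t > 0 else 0.5)  # guess 0.5 before seeing any data
        ones += b
    return preds

def squared_error(preds, bits):
    """Mean squared deviation between predicted probabilities and outcomes."""
    return sum((p - b) ** 2 for p, b in zip(preds, bits)) / len(bits)

random.seed(0)
bits = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]  # i.i.d. bits with P(1) = 0.7
preds = rate_predictor(bits)
print(f"mean squared error: {squared_error(preds, bits):.3f}")  # approaches 0.7 * 0.3 = 0.21
```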

AI

  • Understanding Reasoning LLMs - By Sebastian Raschka, PhD: Sebastian Raschka’s article explores the development and specialization of reasoning LLMs, emphasizing enhancements in reasoning capabilities via specialized training and techniques like inference-time scaling (a small majority-voting sketch follows this list). It discusses DeepSeek’s models, especially DeepSeek-R1, which was refined with both reinforcement learning (RL) and supervised fine-tuning (SFT) to boost reasoning proficiency, exemplifying a successful recipe for training reasoning LLMs. Contrasts are drawn between DeepSeek’s and OpenAI’s models, noting differences in efficiency and cost. The article also highlights model distillation as a cost-efficient way to build smaller yet capable models, though it depends on a pre-existing stronger model. Techniques like chain-of-thought prompting and RL are also explored as ways to improve LLM reasoning, advocating a balanced, strategic approach for future models.
  • Interview With Deepseek Founder: We’re Done Following. It’s Time to Lead: In an interview, DeepSeek founder Liang Wenfeng discusses the Chinese AI startup’s journey to become a leader in technological innovation. After their open-source V2 model gained recognition, Liang emphasizes that DeepSeek was not intended as a disruptor, but the company’s focus on foundational research over applications distinguishes it from its competitors. Their approach involves narrowing gaps in training efficiency and pioneering new structures, aiming for Artificial General Intelligence (AGI). DeepSeek remains committed to open-source development, valuing innovation and talent over profit and conventional business models, and believes that the industry will evolve to appreciate deep-tech innovation. Liang also highlights that the industry needs confidence and creativity to bridge the gap between imitation and originality, underlining that the future will require a specialized division of labor in AI advancements.
  • Agents Are Not Enough: In “Agents Are Not Enough,” Chirag Shah explores the evolution and limitations of AI agents, which range from simple tasks like adjusting thermostats to complex functions like autonomous driving. Despite their potential, agents struggle with issues like limited generalization, scalability, coordination, robustness, and ethical concerns. Shah suggests integrating machine learning with symbolic AI, developing new architectures, enhancing coordination mechanisms, and prioritizing ethical design to address these challenges. The article proposes a new ecosystem involving Agents, Sims representing user profiles, and Assistants to ensure personalization, privacy, and trust. By focusing on these areas, agents could become more capable and widely adopted, akin to an app store for vetted agents.
  • Building Effective Agents: The article “Building Effective Agents” by Anthropic explores the development of large language model (LLM) agents across industries. Successful implementations tend to use simple, composable patterns rather than complex frameworks. Agents, defined as systems in which the model dynamically directs its own process and tool use, contrast with workflows, which follow predefined paths. The article advises using agents for open-ended tasks that require flexibility and model-driven decision-making, while workflows offer predictability for well-defined tasks. Agents are valuable in customer support and coding applications, where they combine conversation and action. The text builds agentic systems up from a foundational block, the augmented LLM (an LLM extended with retrieval, tools, and memory). Simplicity, transparency, and well-documented tool interfaces are essential for agent design, and developers are encouraged to iterate on implementations based on measured performance (a minimal agent-loop sketch follows this list).
  • Introducing Deep Research | OpenAI: OpenAI has launched “Deep Research,” an advanced ChatGPT feature that autonomously conducts extensive online research to deliver detailed reports. Capable of handling complex queries in minutes, it synthesizes data from a vast array of sources, making it well suited to fields requiring precise information such as finance, science, and policy. Built on the OpenAI o3 model, it excels at reasoning and data analysis. Trained with reinforcement learning, it showed substantial gains on “Humanity’s Last Exam” across a wide range of subjects. It still has limitations, such as hallucinated facts, occasional errors in report formatting, and poor confidence calibration, which are expected to improve over time.
  • Using AI for Coding: My Journey With Cline and Large Language Models: In “Using AI for Coding: My Journey With Cline and Large Language Models,” Paolo Galeone shares his experience using Cline, an AI coding assistant integrated into VSCode via a plugin, to improve his development workflow and his site’s user experience. Primarily a backend developer, Galeone struggled with frontend tasks because of limited web-framework knowledge and an aversion to CSS. AI tooling changed that, letting him redesign his website’s pages with significant improvements in design and functionality.
  • Our Own Agents With Their Own Tools.: The article “Our Own Agents With Their Own Tools” from the blog Irrational Exuberance describes building a chat interface that lets users write prompts, select models, and call various tools. Initially planned as a fullstack TypeScript project built with Cursor, it shifted to Python3, FastAPI, and PostgreSQL for familiarity. The author highlights how even simple tools make agents considerably more powerful, and envisions a future where engineers run systems that manage events through tool-aware agents, automating processes like scheduling and content moderation and making simple applications substantially useful without extensive effort.
  • Deepseek-Ai/DeepSeek-VL2: DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding: DeepSeek-VL2 is an advanced series of Mixture-of-Experts Vision-Language Models developed to enhance multimodal understanding, succeeding its predecessor, DeepSeek-VL. This model series offers improved performance across several tasks, such as visual question answering, optical character recognition, and visual grounding. Available in three variants—DeepSeek-VL2-Tiny, DeepSeek-VL2-Small, and DeepSeek-VL2—these models, with active parameters ranging from 1.0B to 4.5B, achieve competitive or state-of-the-art results with fewer activated parameters compared to existing models.
  • Google Quietly Walks Back Promise Not to Use AI for Weapons or Harm: Google has quietly removed its commitment to ethical AI development from its AI principles page, specifically its pledge not to use AI for weapons or harm. This move, which has drawn significant criticism, suggests a shift in the company’s stance on developing AI technologies, including those with lethal capabilities. Former employees and ethicists express concern that the change negates previous efforts to promote ethical AI usage. Google follows OpenAI, which similarly retracted commitments against using AI for military purposes. This trend reflects the tech industry’s prioritization of profit and investor pressure over ethical considerations.
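
On the reasoning-LLM piece, here is a minimal sketch of one common inference-time scaling recipe, self-consistency by majority voting: sample several answers from the model and keep the most frequent one. The sample_answer() stub and its answer distribution are invented stand-ins for a real model call.

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Stand-in for sampling one answer from a reasoning model at temperature > 0.
    The answer distribution here is made up purely for illustration."""
    return random.choices(["437", "427", "447"], weights=[8, 1, 1])[0]

def majority_vote(question: str, n_samples: int = 16) -> str:
    """Inference-time scaling via self-consistency: spend extra compute by
    sampling several answers and returning the most common one."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

random.seed(42)
print(majority_vote("What is 19 * 23?"))  # individual samples are noisy, the vote settles on "437"
```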
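
And for Anthropic’s agent-versus-workflow distinction, a minimal agent-loop sketch: the model, stubbed here as call_model() since no real LLM API is wired in, decides at each step whether to call a tool or to stop, instead of following a predefined path. The calculator tool and the message format are assumptions for illustration, not Anthropic’s code.

```python
def calculator(expression: str) -> str:
    """Example tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def call_model(messages):
    """Stand-in for an LLM call: returns either a tool request or a final answer.
    A real agent would send `messages` to a model and parse its reply."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "input": "19 * 23"}
    return {"answer": f"The result is {messages[-1]['content']}."}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # the model, not a fixed workflow, decides when to stop
        reply = call_model(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["input"])
        messages.append({"role": "tool", "content": result})
    return "Stopped after reaching the step limit."

print(run_agent("What is 19 * 23?"))  # -> "The result is 437."
```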

Economics

  • El Análisis De La Riqueza en España: 20 Años De La Encuesta Financiera De Las Familias: The Banco de España’s report, “El Análisis De La Riqueza en España,” highlights 20 years of data from the Encuesta Financiera de las Familias (EFF), offering crucial insights into Spanish households’ financial situations. Initiated in 2002, this pioneering survey in the eurozone provides detailed information on assets, income, and debts. The EFF enables analysis of wealth distribution across generations, revealing significant trends such as declining home ownership among younger Spaniards. It has influenced similar surveys across the eurozone, contributing to the ECB’s Household Finance and Consumption Survey.

Others

  • Cuts to Maths Are a National Miscalculation: Marcus du Sautoy argues that the UK government’s ambitions to lead in artificial intelligence are undermined by its approach to mathematics education. He highlights the critical role of mathematics in powering AI and expresses concern over funding cuts to the Advanced Mathematics Support Programme, which threaten the development of a robust talent pipeline. He underscores the importance of inspiring and knowledgeable maths teachers to foster the next generation and warns that neglecting maths education will hinder growth in both AI innovation and the broader scientific community.