<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Michał Gancarski</title>
  <subtitle>Michał Gancarski writes about technology, creativity and the human side of building software.</subtitle>
  <link href="https://gancarski.pl/feed.xml" rel="self"/>
  <link href="https://gancarski.pl/"/>
  <updated>2026-03-08T00:00:00Z</updated>
  <id>https://gancarski.pl/</id>
  <author>
    <name>Michał Gancarski</name>
  </author>
  <entry>
    <title>Interview: Data Innovation Summit</title>
    <link href="https://gancarski.pl/writing/interview-data-innovation-summit---6e2e8d7/"/>
    <updated>2020-03-10T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/interview-data-innovation-summit---6e2e8d7/</id>
    <content type="html">&lt;p&gt;&lt;em&gt;Note: while the interview was conducted in March 2020, due to the COVID-19 pandemic the conference was moved to August. The interview starts right below. It is available on the &lt;a href=&quot;https://hyperight.com/up-close-overview-of-data-engineering-from-a-data-engineers-point-of-view/&quot;&gt;organizer’s website&lt;/a&gt; as well.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;[Q] Hi Michal, we are happy to have you as a speaker representing Zalando at the 5th Celebrate edition of the Data Innovation Summit. It’s your first time with us, so please tell us a bit more about yourself and your role at Zalando.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I started my career in software development as a web developer in Krakow, focusing mostly on smaller, freelance projects. After moving to Berlin over six years ago I joined RapidApe, an ad-tech startup where I had the opportunity to tackle more complex issues like managing data models, maintaining and developing toolkits for building analytical dashboards, building an analytics API and, more importantly, diving into data processing workflows necessary to keep the operation running.&lt;/p&gt;
&lt;p&gt;This experience allowed me to successfully apply for a backend engineering position at Zalando, where I quickly switched to what interested me the most - data engineering. Since then, I have spent most of my time at Zalando working on various subsystems of the Data Lake the company was building. I have taken part in diverse projects like the centralised collection of dataset metadata, pipelines delivering those datasets, access management for dozens of engineering teams, and others.&lt;/p&gt;
&lt;p&gt;Currently, while still at Zalando, I am focusing less on data infrastructure and more on the development of data and machine learning pipelines. I am a member of a team that applies the tools of data science to help Zalando automate and improve its buying decisions with respect to distributions of apparel sizes for various combinations of clothing categories and styles. There is an (as yet) untapped potential there to reduce waste, optimize stock and, in the end, positively influence the bottom line of the company.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;[Q] As 2020 is the year in which the Data Innovation Summit turns 5, could you point out what have been the most important developments with data and advanced analytics in the last 5 years according to you?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Five years in data engineering seems like a long time. It is hard to believe, for example, that distributed data processing engines like Apache Spark and Apache Flink are only twice as old, even if we take into account their early development periods in academia.&lt;/p&gt;
&lt;p&gt;Anyway, there were many important developments in data and analytics over this period. Let me talk about what I think are the most significant ones:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The emergence of scalable and relatively inexpensive cloud storage in the form of object stores, like AWS S3 or Google Cloud Storage. At first, those services were facing issues with compatibility with common frameworks like Hadoop Map-Reduce. Fortunately, over time libraries were developed to handle those issues transparently. Nowadays, large object stores serve as a de-facto replacement for clusters running distributed file systems like HDFS, serving as data sources and data sinks for computation, processing and query engines like Spark, Presto, Impala or Flink.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Data Lakes as repositories of datasets complementary to traditional data warehouses and data marts. While the concept of the Data Lake is older than five years (it dates back to the year 2010), its adoption only accelerated more recently. This is partially a consequence of the previous trend. With operational simplification and the sinking cost of data storage, companies preserve more and more datasets in their raw form, to process them differently depending on the use case. Sometimes it is to train machine learning models (which requires deriving new attributes through feature engineering, combined with flat, denormalized schemas), sometimes for more traditional BI applications. The latter usually means star and snowflake schemas.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Notebooks as interactive gateways to data analysis, data science and engineering. The ability to mix different programming and query languages, visualize data and leave explanatory comments in one, shareable environment, changed the way we work with data. For example, a data scientist can share her report or experiment (code, data, discussion of the methodology) with the rest of the team in one place. This notebook can be then used by other data scientists to validate the results, or by business analysts to generate a simplified report and communicate the impact of the results to decision makers leading the organization. Even more, data engineers can use the same notebook to improve their understanding of what data and in what form is needed to turn the experiment into a production pipeline. In fact, I see more and more data engineers working with notebooks to prototype data pipelines and experiment with various ways of expressing required data transformations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Python becoming the common language of data engineering and data science, slowly (but not completely) replacing Java, R or Matlab. A modern team consisting of data engineers and data scientists can perform most of its tasks using Python and libraries written for it. This includes building and scheduling data pipelines, interacting with cloud infrastructure, performing preliminary data analysis, and prototyping or deploying machine learning models in production. Even if the cores of some libraries and frameworks are implemented in more performant languages (like C++ in the case of TensorFlow, or Scala when we talk about Spark), there is always a way of interacting with them using Python. We got to the point where we see job openings asking explicitly for “Python Data Engineers”. To be clear, Python will never fully replace other languages, but at the moment it is the safest bet for someone willing to start their career in the broadly understood field of data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stream processing rising in popularity and enabling new applications of analytics and machine learning to problems like financial fraud detection, optimization of online advertising and recommendations, but also IoT in general. Especially the last one looks significant. From electric scooters to monitoring of industrial devices or public transit systems - we are seeing huge improvements in all of those areas. Thanks to advances in engines like Apache Flink and other developments, like the introduction of Structured Streaming to Apache Spark, it is becoming easier to express correct, complex computations on streaming data and compose those into larger workflows. It is, in essence, the expansion of dataflow programming into the world of large-scale, distributed systems handling high-volume, real-time data streams.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
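&lt;p&gt;To make the dataflow idea above concrete, here is a minimal, self-contained Python sketch of a tumbling-window aggregation - the kind of computation engines like Flink or Structured Streaming express over unbounded streams. The event shape and window size are invented for the example; real engines add event-time handling, watermarks and fault tolerance on top of this.&lt;/p&gt;

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_s):
    """Assign (timestamp, key) events to fixed, non-overlapping time
    windows and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_size_s)  # align to window start
        windows[window_start][key] += 1
    return {start: dict(counts) for start, counts in sorted(windows.items())}

# Click events as (epoch_seconds, user_id) pairs.
clicks = [(0, "a"), (3, "b"), (4, "a"), (12, "a"), (14, "b")]
per_window = tumbling_window_counts(clicks, window_size_s=10)
# per_window: {0: {"a": 2, "b": 1}, 10: {"a": 1, "b": 1}}
```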
&lt;p&gt;&lt;strong&gt;[Q] You are going to present at the Data Engineering Stage on how to design, build, deploy and monitor a serverless data infrastructure. As you yourself are a data engineer, your presentation is of a more technical nature. Could you explain to us how the initiative for a serverless data infrastructure began in Zalando and what are the benefits from it?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When I joined Zalando, the company was already deploying nearly all of its microservices in the cloud (in this case - on AWS), using tooling built internally for this particular purpose. However, there was still no certainty on how to take advantage of serverless components in the context of data processing. While it was clear that S3, Amazon’s large-scale object store, was going to be the go-to location for Data Lake datasets, our thinking about data pipelines still gravitated towards traditional applications deployed on EC2 instances that, for all intents and purposes, are managed virtual machines.&lt;/p&gt;
&lt;p&gt;This approach has proven to be of limited scalability in terms of the engineering capacity available to the Data Lake team. Given the complexity of building and managing a Data Lake with thousands of continuously updated datasets, we were looking for ways to offload as much operational complexity as possible to the cloud vendor and let its infrastructure handle growing data volumes, traffic and scheduling density.&lt;/p&gt;
&lt;p&gt;Since essentially every data pipeline is a collection of queues, schedulers, workflow managers, processing engines and, last but not least, storage layers, we started looking into replacing more traditional tools with their serverless counterparts, like SQS (Simple Queue Service), and into using AWS Lambda (lightweight units of stateless computation) to compose more elaborate applications.&lt;/p&gt;
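&lt;p&gt;As a hedged illustration of that composition style, the sketch below shows the typical shape of a Python Lambda handler triggered by SQS: the service delivers a batch of messages under the event’s Records field, each carrying a JSON string body. The transform function and message fields are hypothetical, and a real pipeline would write its results to S3 rather than return them.&lt;/p&gt;

```python
import json

def transform(message):
    # Hypothetical business logic: derive a target key for a dataset event.
    return {"dataset": message["dataset"], "target_key": "processed/" + message["dataset"]}

def handler(event, context):
    """AWS Lambda entry point for an SQS trigger. SQS delivers a batch
    of messages as a list under the "Records" key of the event dict."""
    results = []
    for record in event["Records"]:
        results.append(transform(json.loads(record["body"])))
    return results
```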
&lt;p&gt;The biggest breakthrough in this direction came when AWS Step Functions, Amazon’s serverless workflow offering, became available in Europe. We decided to try using them for a prototype version of one of our pipelines and it worked out really well. The pipeline was put into production much faster than we otherwise could have managed. So far, it has been running without major incidents for years.&lt;/p&gt;
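&lt;p&gt;For a flavour of what such a workflow definition looks like, here is a minimal Amazon States Language sketch of a three-step pipeline. The state names, Lambda ARNs and retry settings are purely illustrative, not the actual Zalando setup.&lt;/p&gt;

```json
{
  "Comment": "Illustrative dataset-delivery workflow (all names invented).",
  "StartAt": "ExtractDataset",
  "States": {
    "ExtractDataset": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-central-1:123456789012:function:extract",
      "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}],
      "Next": "TransformDataset"
    },
    "TransformDataset": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-central-1:123456789012:function:transform",
      "Next": "PublishToDataLake"
    },
    "PublishToDataLake": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-central-1:123456789012:function:publish",
      "End": true
    }
  }
}
```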
&lt;p&gt;After this initial positive experience, we have decided to double down on the approach, not only for new pipelines, but also for rebuilding those that were already in place. This way a relatively small team was able to maintain and develop a petabyte-scale Data Lake and operate several large pipelines that not only deliver a fixed collection of datasets but also let other teams at Zalando add more of them using a self-service approach.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;[Q] Is there anything you have to be careful about when building serverless data infrastructure?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;There are several aspects of serverless data infrastructure that companies need to be mindful of. They are mostly related to the way cloud vendors operate.&lt;/p&gt;
&lt;p&gt;First of all, vendor lock-in. This may or may not be a significant issue, depending on how we look at it. However, some mitigation strategies can be put in place in how, for example, data pipelines are built. If you use Google Cloud Functions or AWS Lambda, try to write as much of their code as possible in a way that is transferable to other platforms. More generally, when dealing with distributed computation, make sure you can express it in a portable way. For example, a stream processing job written for Apache Flink can be reused in Kinesis Data Analytics on AWS.&lt;/p&gt;
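&lt;p&gt;One way to apply this advice in practice is to keep the business logic in a pure, SDK-free function and wrap it in thin, platform-specific entry points. A minimal Python sketch, with hypothetical function names and payload fields:&lt;/p&gt;

```python
def enrich(payload):
    # Portable core: pure Python, no vendor SDK imports.
    return {"dataset": payload["dataset"], "status": "enriched"}

def aws_lambda_handler(event, context):
    # Thin AWS Lambda adapter around the portable core.
    return enrich(event)

def gcf_handler(request):
    # Thin Google Cloud Functions (HTTP) adapter around the same core;
    # in the Python runtime, request is a Flask request object.
    return enrich(request.get_json())
```

Moving to another platform then means rewriting only the adapters, while the core logic stays untouched and unit-testable on its own.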
&lt;p&gt;Second, mind the scalability of your budget and the projected cost of storage and computation. While cloud infrastructure promises rapid scalability without all the operational hassle usually associated with it, it will scale beyond the size you may want to pay for. It is easy to add data to an “unlimited” object store, but with every additional gigabyte you will have to pay more on an ongoing basis. To mitigate that, put in place a proper data retention policy that ensures unused and low-value data assets are deleted or moved to cheaper storage classes.&lt;/p&gt;
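&lt;p&gt;On S3, such a retention policy can be expressed declaratively as a bucket lifecycle configuration. The sketch below (the prefix, day counts and storage class are chosen arbitrarily for illustration) archives raw events to Glacier after 90 days and deletes them after two years:&lt;/p&gt;

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire-raw-events",
      "Status": "Enabled",
      "Filter": {"Prefix": "raw/events/"},
      "Transitions": [
        {"Days": 90, "StorageClass": "GLACIER"}
      ],
      "Expiration": {"Days": 730}
    }
  ]
}
```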
&lt;p&gt;Third, remember that cloud and serverless is just a layer (or several layers) of software sitting on top of physical data centers. This means that at some point you may reach soft or hard limits of your cloud provider. Soft limits are usually easy to handle by contacting customer support. However, before you decide to use a service that is part of your cloud vendor’s offering, check whether it has hard limits imposed on some of its dimensions (data and request throughput, scaling out, scaling up, etc.). This way you can avoid nasty surprises at a critical moment, when you are the most vulnerable - i.e. when you need to scale out further but cannot, or are slower to do so than expected.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;[Q] What are some data engineering trends that would mark 2020 according to you?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Apart from the continuation of what was happening for the last five years, one additional trend comes to mind.&lt;/p&gt;
&lt;p&gt;In 2020 we will see growing popularity and adoption of table formats offering transactional guarantees on large datasets stored in cloud object stores. I am talking about solutions like Delta Lake, Apache Hudi or Apache Iceberg. Their biggest draw is that they bring back ACID properties to large datasets accessed and processed by a diverse ecosystem of computation frameworks. Working with storage formats that ensure snapshot isolation or non-conflicting, transactional writes originating in multiple sources, can greatly simplify (and sometimes even enable) many data engineering tasks.&lt;/p&gt;
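&lt;p&gt;To illustrate the reader-facing guarantee, here is a toy Python sketch of snapshot isolation over a versioned table. This is not how Delta Lake, Hudi or Iceberg are implemented (they track versions through manifest and log files on object storage), but it shows why a pinned snapshot is unaffected by concurrent commits:&lt;/p&gt;

```python
class SnapshotTable:
    """Toy versioned table: every commit publishes a new immutable
    version, so a reader that pinned a snapshot keeps a consistent
    view even while writers keep committing."""

    def __init__(self):
        self._versions = [tuple()]  # version 0 is the empty table

    def append(self, rows):
        # Commit: new version = previous rows plus the appended ones.
        self._versions.append(self._versions[-1] + tuple(rows))
        return len(self._versions) - 1  # number of the new version

    def snapshot(self):
        # Pin the latest version; later commits do not mutate it.
        return self._versions[-1]

table = SnapshotTable()
table.append([{"id": 1}])
reader_view = table.snapshot()  # pinned while another writer commits
table.append([{"id": 2}])
# reader_view still holds one row; a fresh snapshot now holds two
```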
&lt;p&gt;As a consequence of the above, we will see further unification of stream and batch data processing in terms of how we express data transformations but also how we store and transmit data in our daily workflows.&lt;/p&gt;
&lt;p&gt;Current advances in this area include, among others, streaming support for Iceberg in Apache Flink, achieved by Netflix - the original creators of Iceberg. Another noteworthy development is the continuously improving integration of Delta Lake (which originated at Databricks) not only with Apache Spark but also with other engines like Presto or Redshift Spectrum.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;[Q] Some experts predict that the solution for the lack of data engineers would lead to “citizen data engineers” - employees outside of the data engineering team will oversee and manage data pipelines, as well as the overall data lifecycle in order to meet data engineering needs. Do you see this happening in 2020?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We are already seeing this happening at Zalando to a certain extent, with some of the most important (and largest) data pipelines being co-managed by teams interested in particular datasets. In this pattern, a central team builds a data pipeline framework of sorts that can be further configured by a stakeholder when needed.&lt;/p&gt;
&lt;p&gt;A stakeholder determines the source of the data, the types of transformations that should be performed on it, and the location where the results are to be stored. In our case, this happens through pull requests to a central repository. After a PR is reviewed and merged, a continuous integration process is triggered and the pipeline framework adds the new dataset to the list of already processed ones.&lt;/p&gt;
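&lt;p&gt;In such a setup, the pull request often boils down to a small declarative file. A hypothetical example (the field names are invented for illustration and do not reflect Zalando’s actual configuration format):&lt;/p&gt;

```yaml
dataset: sales-events
source:
  type: event-stream
  topic: sales.order-placed
transformations:
  - deduplicate:
      key: order_id
  - project:
      columns: [order_id, sku, occurred_at]
sink:
  format: parquet
  location: s3://data-lake/curated/sales-events/
```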
&lt;p&gt;This approach is not limited to ETL, though. At Zalando we have deployed similar mechanisms for metadata management (mostly for making dataset schemas and security classifications of dataset attributes updatable by interested teams) and infrastructure management. Using a pull request, you can, for example, request a new Databricks Spark cluster, fix incorrect metadata if you find a mistake or request access to particular datasets.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;[Q] Thank you for your time.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Thank you as well. I am looking forward to presenting at the Data Innovation Summit this year!&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Salary Transparency is a Good Thing</title>
    <link href="https://gancarski.pl/writing/salary-transparency-is-a-good-thing---815caeb/"/>
    <updated>2020-08-21T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/salary-transparency-is-a-good-thing---815caeb/</id>
    <content type="html">&lt;p&gt;If greater salary transparency “causes resentment”, as a common argument goes, there is something wrong with the motivation system at your company. A healthy work culture cannot thrive on the lack of transparency about a fundamental parameter of the relationship with your employer.&lt;/p&gt;
&lt;p&gt;The so-called “salary based on value” approach in a market with strong information asymmetry is mostly a myth that does not take into account non-value-related factors. You can be underpaid even if you overachieve.&lt;/p&gt;
&lt;p&gt;If you are a highly competent introvert with insufficient negotiation skills, knowing what others earn (or offer) will level the playing field during a process that is skewed against you.&lt;/p&gt;
&lt;p&gt;On the other hand, if you are already paid well above the average, more transparency will not hurt you. Companies know exactly what compensation distributions look like - after all they are the ones developing salary policies and doing extensive research on their competition. They will continue valuing you regardless of whether your colleagues know how much you earn.&lt;/p&gt;
&lt;p&gt;If you are hesitant about discussing it openly, please contribute your salary information to anonymous databases and salary reports. Bring more transparency to the market. Reduce information asymmetry. It will benefit everyone in the long run.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>AI Whisperers</title>
    <link href="https://gancarski.pl/writing/ai-whisperers---0987526/"/>
    <updated>2022-09-03T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/ai-whisperers---0987526/</id>
    <content type="html">&lt;p&gt;The emergence of generative models is already &lt;a href=&quot;https://www.theverge.com/2022/9/2/23326868/dalle-midjourney-ai-promptbase-prompt-market-sales-artist-interview&quot;&gt;causing people to specialize in “AI whispering”&lt;/a&gt;, or being able to come up with prompts that result in consistently interesting and aesthetically pleasing works. It is both an exciting and a worrying trend, depending on whether you are a tech enthusiast or a struggling visual artist.&lt;/p&gt;
&lt;p&gt;This process is not limited to what is traditionally called “art”. After all, GitHub Copilot is already a thing, writing useful code based solely on its users’ comments.&lt;/p&gt;
&lt;p&gt;But where does this lead us, exactly? How far are we from the moment when the rise in productivity outpaces the growth of demand for new code, ultimately resulting in generally lower demand for software engineers?&lt;/p&gt;
&lt;p&gt;Automation in software engineering is nothing new. On the contrary - it has been at its heart from the very beginning. Decade after decade, programmers have been spending significant amounts of time automating their own work, while standing on the shoulders of taller and taller giants.&lt;/p&gt;
&lt;p&gt;Cloud providers streamline infrastructure provisioning. Libraries for code generation and various techniques of metaprogramming spare us from manual creation of boilerplate code that is necessary but also agonizingly tedious to write. Faster query engines, improved algorithms, new frameworks - up until now they have only increased the demand for new code by lowering its cost, making it easier for software to devour more and more professions, supply chains, research, science, administration and every other aspect of the complex society we live in.&lt;/p&gt;
&lt;p&gt;They also helped create thousands of well-paid, cushy software jobs.&lt;/p&gt;
&lt;p&gt;Is it going to be different this time? Will the giants grow so fast, we won’t be able to climb them anymore? How long before we are able to materialize a complex system, including its auxiliary elements like unit and integration tests or API and infrastructure definitions, by merely specifying it on a very high level?&lt;/p&gt;
&lt;p&gt;“…it has to process, at minimum, 500k messages per second, arriving from all the sources mentioned in the previous paragraph, with at-least-once semantics and p99 latency below 10ms, while deduplicating incoming data…”&lt;/p&gt;
&lt;p&gt;[press “Generate and deploy” to proceed]&lt;/p&gt;
&lt;p&gt;(I encourage you to read the linked interview, it is really interesting)&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Tiny Brains Playing Pong</title>
    <link href="https://gancarski.pl/writing/tiny-brains-playing-pong---6af9638/"/>
    <updated>2022-10-15T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/tiny-brains-playing-pong---6af9638/</id>
    <content type="html">&lt;p&gt;A group of scientists at Cortical Labs connected an artificially grown layer of around 800 thousand human neurons to a computer, &lt;a href=&quot;https://www.npr.org/sections/health-shots/2022/10/14/1128875298/brain-cells-neurons-learn-video-game-pong&quot;&gt;creating a rudimentary bio-digital intelligence&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;From the article:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A layer of living neurons is grown on a special silicon chip at the bottom of a thumb-size dish filled with nutrients. The chip, which is linked to a computer, can both detect electrical signals produced by the neurons, and deliver electrical signals to them.&lt;/p&gt;
&lt;p&gt;To test the learning ability of the cells, the computer generated a game of Pong, a two-dimensional version of table tennis that gained a cult following as one of the first and most basic video games.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;At first, the cells didn’t understand the signals coming from the computer, or know what signals to send the other direction. They also had no reason to play the game.&lt;/p&gt;
&lt;p&gt;So the scientists tried to motivate the cells using electrical stimulation: a nicely organized burst of electrical activity if they got it right. When they got it wrong, the result was a chaotic stream of white noise.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;The approach worked. Cells began to learn to generate patterns of electrical activity that would move the paddle in front of the ball, and gradually rallies got longer.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;And the level of play was remarkable, considering that each network contained fewer cells than the brain of a cockroach, Kagan says.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;As an avid fan of hard science-fiction, and a complete ignoramus in the area of neuroscience (neurocomputing?), I wonder about potential scientific, technological and ethical implications of further research in this direction.&lt;/p&gt;
&lt;p&gt;Faster learning bio-mechanical robots? Intelligent prosthetics figuring out how to correctly react to inputs from our brains? Direct silicon extensions to human cognition?&lt;/p&gt;
&lt;p&gt;What about the ethics of experimenting on artificially grown networks of human neurons that are much more complex than those of very simple animals? To grossly oversimplify, once we reach one billion neurons, we will approach the level of a typical bird, while leaving cats behind.&lt;/p&gt;
&lt;p&gt;The full report, “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world”, &lt;a href=&quot;https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6&quot;&gt;is also openly available&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>The Coming Wave</title>
    <link href="https://gancarski.pl/writing/the-coming-wave---857c537/"/>
    <updated>2023-01-07T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/the-coming-wave---857c537/</id>
    <content type="html">&lt;p&gt;I guess I should finally write something about data engineering or building software in general, but I simply cannot stop interacting with generative models like ChatGPT or Midjourney and thinking about them. I keep coming up with more and more ridiculous prompts, only to be surprised by the results once again.&lt;/p&gt;
&lt;p&gt;The implications of what is coming are disruptive in a spectacular way, for good and bad. Which jobs will survive? In what form? What new professions will emerge? How will this nascent technology develop and help us create an even slightly better world to live in?&lt;/p&gt;
&lt;p&gt;Can we even realistically regulate their use?&lt;/p&gt;
&lt;p&gt;(I think we can and we have to.)&lt;/p&gt;
&lt;p&gt;The truth is, we don’t know what is going to happen. We know it is a huge wave, but we are limited to educated guesses about its direction and impact. The best we can do is to keep thinking about a world in which advanced generative AIs are ubiquitous. Where they continuously learn and talk to each other, improving themselves further. And then imagine our place in it.&lt;/p&gt;
&lt;p&gt;We are merely at the beginning. The models that are currently at our disposal are already impressive in many ways, even if sometimes their output is nothing more than convincing-sounding nonsense. They will only become better, and fast, at what they do, especially when trained on large corpora of specialized knowledge.&lt;/p&gt;
&lt;figure&gt;
              &lt;img src=&quot;https://gancarski.pl/assets/images/abstract-ai.jpg&quot; alt=&quot;The dissolution of self, bright colours, postmodernist. Generated using Midjourney.&quot; /&gt;
              &lt;figcaption&gt;The dissolution of self, bright colours, postmodernist. Generated using Midjourney.&lt;/figcaption&gt;
            &lt;/figure&gt;
</content>
  </entry>
  <entry>
    <title>Work Less to Achieve More</title>
    <link href="https://gancarski.pl/writing/work-less-to-achieve-more---6202be8/"/>
    <updated>2023-07-27T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/work-less-to-achieve-more---6202be8/</id>
    <content type="html">&lt;p&gt;&lt;em&gt;“If you want to achieve something, you need to have enough grit to stay up late and put in 60-hour long work weeks”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Do we, though? After all, we are in for more than forty years of professional activity. Think about them as forty kilometers of a typical marathon run. Do successful long-distance runners burn most of their energy at the beginning? No, they adjust the pace by considering the entire stretch they still need to complete.&lt;/p&gt;
&lt;p&gt;What is more, unlike a predetermined route of a marathon, our career paths are likely to change due to sudden shifts in market conditions, so we had better preserve some of our energy and precious mental resources to be able to adjust when necessary. This requires plenty of free time to focus and experiment, and, generally, peace of a well-rested mind.&lt;/p&gt;
&lt;p&gt;Quite predictably, as yet another study suggests, working less makes it easier to avoid burnout while being &lt;a href=&quot;https://www.businessinsider.com/four-day-workweek-companies-profits-workers-happier-efficient-takano-2023-7&quot;&gt;more engaged and productive in the long run&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The results are in: A four-day workweek pays off for workers and their workplaces, and one lawmaker says that means it’s time for 32 hours on the clock to become law.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;Workers were more efficient, even as work intensity dipped. They worked less, and were able to better maintain their work-life balance. Revenue at firms participating grew by 15%, and a third of employees said they were less likely to leave their jobs.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;Takano is not the only lawmaker pushing for Americans to work fewer hours in each week. In February, Vermont Sen. Bernie Sanders wrote on Twitter that &amp;quot;with exploding technology and increased worker productivity, it’s time to move toward a four-day work week with no loss of pay. Workers must benefit from technology, not just corporate CEOs.&amp;quot;&lt;/p&gt;
&lt;p&gt;He was referring to the pilot program’s December findings six months in, after which revenue among participating companies rose 8.14%, and 67% of employees reported feeling less burned-out, with the extra day allowing them to exercise and sleep more.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In general, a growing body of evidence points towards the conclusion that, long term, working less would be better for everyone involved, be it the employee or the employer. You will find some of it, including various studies and articles, in the article quoted and linked above.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Psychological Safety in the Workplace</title>
    <link href="https://gancarski.pl/writing/psychological-safety-in-the-workplace---1bead2e/"/>
    <updated>2023-08-28T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/psychological-safety-in-the-workplace---1bead2e/</id>
    <content type="html">&lt;p&gt;Looking back at the professional choices I have made so far, I realized I have always gravitated towards companies and teams that provided:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;an enticing technical challenge (without going into the details of what it means)&lt;/li&gt;
&lt;li&gt;a psychologically safe environment in which I felt free and supported to pursue solutions to that challenge&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What is significant is that while the challenge has always been merely a necessary condition, the safety alone was sufficient for me to perform better as a team member and an individual contributor.&lt;/p&gt;
&lt;p&gt;I have been a member of teams in which I did my best even if the challenge itself was not that compelling. Conversely, I have spent time in environments that provided fantastic challenges that I couldn’t tackle to the best of my ability, because I did not feel safe enough.&lt;/p&gt;
&lt;p&gt;From a business perspective, focusing on psychological safety is not just a way to create more humane working conditions (although if we can, then why the hell wouldn’t we want to do so). Lowering the emotional cost of making mistakes or experiencing outright failure will lead to better outcomes through increased trust and cooperation, less burnout and a greater willingness to openly test one’s limitations. When done well, it may even &lt;a href=&quot;https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-psychological-safety&quot;&gt;result in a cycle of positive feedback&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Once a safe and supportive team climate has been established, a challenging leadership style can sometimes further strengthen psychological safety. A challenging leader asks team members to reexamine assumptions about their work and how they can exceed expectations and fulfill their potential. Challenging leadership styles have been linked with increased employee creativity and desire to improve.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It may sound trite, but in order for someone to get out of their comfort zone to achieve better results, the comfort zone must be established in the first place.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Lethal Regression</title>
    <link href="https://gancarski.pl/writing/lethal-regression---613a0be/"/>
    <updated>2024-07-24T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/lethal-regression---613a0be/</id>
    <content type="html">&lt;p&gt;Seeing all kinds of vastly different statistical models being bundled under the umbrella term of “AI” is as frustrating as witnessing anything that is not a relational database with SQL being called “NoSQL”&lt;sup class=&quot;footnote-ref&quot;&gt;&lt;a href=&quot;https://gancarski.pl/writing/lethal-regression---613a0be/#fn1&quot; id=&quot;fnref1&quot;&gt;[1]&lt;/a&gt;&lt;/sup&gt;. At this level of generality, the term loses enough meaning to sabotage almost any conversation about it, while promoting sensationalized pieces that induce more anxiety than understanding.&lt;/p&gt;
&lt;p&gt;“Is AI going to kill us all?” Well, a linear regression in an Excel spreadsheet probably won’t do that, unless what is implied by its coefficients causes a heart attack.&lt;/p&gt;
&lt;p&gt;But what about an agent model that can make decisions and use an LLM to interact with the outside world? What if it is embedded in a robot body and can use another neural network to control it? What if there are millions of them, silently exchanging messages and conspiring about how to turn the Solar System into a cloud of grey nanogoo&lt;sup class=&quot;footnote-ref&quot;&gt;&lt;a href=&quot;https://gancarski.pl/writing/lethal-regression---613a0be/#fn2&quot; id=&quot;fnref2&quot;&gt;[2]&lt;/a&gt;&lt;/sup&gt;?&lt;/p&gt;
&lt;p&gt;We don’t really know. And we won’t know until we try building and running systems like that for a prolonged period.&lt;/p&gt;
&lt;p&gt;OK, but “will LLMs evolve into AGI?&lt;sup class=&quot;footnote-ref&quot;&gt;&lt;a href=&quot;https://gancarski.pl/writing/lethal-regression---613a0be/#fn3&quot; id=&quot;fnref3&quot;&gt;[3]&lt;/a&gt;&lt;/sup&gt;”. No idea. What is “AGI”? We have not even managed to agree on a workable definition so far. Should it be human-like or just general? Should it be more general than an average human? Or more general than the sum of all humans and therefore not human anymore?&lt;/p&gt;
&lt;p&gt;By the way, it is called “ASI”&lt;sup class=&quot;footnote-ref&quot;&gt;&lt;a href=&quot;https://gancarski.pl/writing/lethal-regression---613a0be/#fn4&quot; id=&quot;fnref4&quot;&gt;[4]&lt;/a&gt;&lt;/sup&gt; now. Keep up, please!&lt;/p&gt;
&lt;p&gt;In this chaotic conversational landscape it is far too easy to be overinfluenced by the merchants of hype and the prophets of doom alike. Instead, we need more precision for the masses. Definitions that are understandable, with clear boundaries. Articles and videos popularising the discipline with careful communication about which claims are warranted and which are not, and under which assumptions.&lt;/p&gt;
&lt;p&gt;One such video, by &lt;a href=&quot;https://www.linkedin.com/in/jodieburchell/&quot;&gt;Jodie Burchell, PhD&lt;/a&gt;, is embedded below. It is a presentation about the strengths and limitations of LLMs, recorded at &lt;a href=&quot;https://gotopia.tech/conferences/79/goto-amsterdam-2024&quot;&gt;GOTO Amsterdam 2024&lt;/a&gt;. It is clear, precise, informative and digestible to a general audience.&lt;/p&gt;
&lt;p&gt;I wholeheartedly recommend it.&lt;/p&gt;
&lt;div style=&quot;position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; margin: 2rem 0;&quot;&gt;
              &lt;iframe style=&quot;position: absolute; top: 0; left: 0; width: 100%; height: 100%;&quot; src=&quot;https://www.youtube-nocookie.com/embed/Pv0cfsastFs&quot; title=&quot;Beyond the Hype: A Realistic Look at Large Language Models • Jodie Burchell • GOTO 2024&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot;&gt;
              &lt;/iframe&gt;
            &lt;/div&gt;
&lt;section class=&quot;footnotes&quot;&gt;
&lt;ol class=&quot;footnotes-list&quot;&gt;
&lt;li id=&quot;fn1&quot; class=&quot;footnote-item&quot;&gt;&lt;p&gt;“NoSQL” may mean a document store, a graph triple store, a key-value store, a vector database etc. Some of them will implement a (limited) variant of SQL anyway, while not being relational in the sense PostgreSQL or MySQL are. &lt;a href=&quot;https://gancarski.pl/writing/lethal-regression---613a0be/#fnref1&quot; class=&quot;footnote-backref&quot;&gt;↩︎&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&quot;fn2&quot; class=&quot;footnote-item&quot;&gt;&lt;p&gt;Wikipedia has a &lt;a href=&quot;https://en.wikipedia.org/wiki/Gray_goo&quot;&gt;good article&lt;/a&gt; about the so-called “grey goo”. &lt;a href=&quot;https://gancarski.pl/writing/lethal-regression---613a0be/#fnref2&quot; class=&quot;footnote-backref&quot;&gt;↩︎&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&quot;fn3&quot; class=&quot;footnote-item&quot;&gt;&lt;p&gt;“Artificial General Intelligence” &lt;a href=&quot;https://gancarski.pl/writing/lethal-regression---613a0be/#fnref3&quot; class=&quot;footnote-backref&quot;&gt;↩︎&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&quot;fn4&quot; class=&quot;footnote-item&quot;&gt;&lt;p&gt;“Artificial Super Intelligence” &lt;a href=&quot;https://gancarski.pl/writing/lethal-regression---613a0be/#fnref4&quot; class=&quot;footnote-backref&quot;&gt;↩︎&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/section&gt;
</content>
  </entry>
  <entry>
    <title>The Ostrich Jockey</title>
    <link href="https://gancarski.pl/writing/the-ostrich-jockey---a09962c/"/>
    <updated>2024-08-01T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/the-ostrich-jockey---a09962c/</id>
    <content type="html">&lt;p&gt;Take a quick look at the picture a friend of mine shared on social media:&lt;/p&gt;
&lt;figure&gt;
              &lt;img src=&quot;https://gancarski.pl/assets/images/vintage-ostrich-jockey.jpg&quot; alt=&quot;An Ostrich Jockey.&quot; /&gt;
              &lt;figcaption&gt;An Ostrich Jockey.&lt;/figcaption&gt;
            &lt;/figure&gt;
&lt;p&gt;Now be honest - did you assume it was an output from a generative model? I certainly asked myself if it was.&lt;/p&gt;
&lt;p&gt;After spending a moment to scrutinize my reaction, I started thinking about what it meant for the trust I still put in images and videos I see in the media, both traditional and social.&lt;/p&gt;
&lt;p&gt;The cost of generating compelling fakes is only going to decrease, drastically. In a year (or five - it doesn’t really matter), we will be able to create a fictional, realistic depiction of anything we can imagine, completely offline, on our phones, just for the creative fun of it.&lt;/p&gt;
&lt;p&gt;Unfortunately, the same technology is going to supercharge generation of misinformation on an industrial scale.&lt;/p&gt;
&lt;p&gt;Forget about obviously fake Twitter accounts or bot farms pumping nonsense directly into Reddit’s comment sections. Think big - like entire, professional-looking “news sites” with made-up videos, pictures, articles, user comments and author bios linking to equally bogus LinkedIn, Twitter, Facebook and Instagram profiles. Targeting real users with tailored feeds based on data extracted from their social media accounts.&lt;/p&gt;
&lt;p&gt;Add quality branding, on the level of that used by CNN or BBC, and have everything generated based on a set of prompts, on demand, according to the desired style and narrative, fully believable and without a shred of authenticity.&lt;/p&gt;
&lt;p&gt;Imagine large scale, fully automated misinformation, orders of magnitude grander and more persuasive than what we are already facing.&lt;/p&gt;
&lt;p&gt;Without public, open processes for establishing a chain of custody and trust, it is going to be increasingly difficult to maintain the good skepticism that helps us question our sources and think independently, and easier to be taken over by the bad kind - the one that leads to questioning absolutely everything, leaving us without a sense of shared reality, unable to make a sound judgement about what is real and what is not.&lt;/p&gt;
&lt;p&gt;I am not sure what solutions there could be. A decentralized repository of vetted digital assets, maintained by a federation of public institutions (archives, libraries, universities) and media companies, would be an option. The underlying technology could involve some implementation of a blockchain-based ledger, but this is beside the point. Most importantly, if we are to make it, it will require cooperation, technology and the collective will to establish, by legal custom or regulatory enforcement, standards that define what constitutes a trusted source&lt;sup class=&quot;footnote-ref&quot;&gt;&lt;a href=&quot;https://gancarski.pl/writing/the-ostrich-jockey---a09962c/#fn1&quot; id=&quot;fnref1&quot;&gt;[1]&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;By the way, the picture is authentic, and depicts an ostrich jockey on a street in Brussels, in 1933. Originally black and white, it was &lt;a href=&quot;https://www.instagram.com/p/C3fAAhvt9nU/&quot;&gt;professionally colorized&lt;/a&gt;.&lt;/p&gt;
&lt;section class=&quot;footnotes&quot;&gt;
&lt;ol class=&quot;footnotes-list&quot;&gt;
&lt;li id=&quot;fn1&quot; class=&quot;footnote-item&quot;&gt;&lt;p&gt;The issue is not new, also in the context of blockchain: &lt;a href=&quot;https://www.frontiersin.org/journals/blockchain/articles/10.3389/fbloc.2024.1306058/full&quot;&gt;https://www.frontiersin.org/journals/blockchain/articles/10.3389/fbloc.2024.1306058/full&lt;/a&gt; &lt;a href=&quot;https://gancarski.pl/writing/the-ostrich-jockey---a09962c/#fnref1&quot; class=&quot;footnote-backref&quot;&gt;↩︎&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/section&gt;
</content>
  </entry>
  <entry>
    <title>Don&#39;t Delegate Thinking</title>
    <link href="https://gancarski.pl/writing/dont-delegate-thinking---67ef209/"/>
    <updated>2024-10-29T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/dont-delegate-thinking---67ef209/</id>
    <content type="html">&lt;p&gt;“Start a post, try writing with AI”, a prompt on LinkedIn invites.&lt;/p&gt;
&lt;p&gt;Thanks, LinkedIn, but I would rather not.&lt;/p&gt;
&lt;p&gt;I write when I feel the need to construct an argument, to tell a story, to test whether a chain of reasoning makes sense. Putting it down on a piece of paper and reading it back to myself after a moment is the first assessment of its quality.&lt;/p&gt;
&lt;p&gt;This is why I need to do the writing myself. If I cannot express it, I cannot explain it. And if I cannot explain it, I don’t really understand it. The clarity of thought I achieved is probably insufficient.&lt;/p&gt;
&lt;p&gt;Delegate your writing to a model and you will delegate your thinking to a model. And if you delegate your thinking - its quality will slowly deteriorate, like a muscle in atrophy.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Premature Standardization</title>
    <link href="https://gancarski.pl/writing/premature-standardization---c217d22/"/>
    <updated>2024-11-06T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/premature-standardization---c217d22/</id>
    <content type="html">&lt;p&gt;Since software teams facing similar challenges develop similar solutions, they will naturally accumulate redundant software artifacts. Data pipelines, queries, scripts, pieces of infrastructure, libraries, data definitions will repeat, or at least “rhyme” across verticals and business units.&lt;/p&gt;
&lt;p&gt;They will also contain localized knowledge, experience, and valuable insights into the particular needs of their creators - needs that are not universal across the organization.&lt;/p&gt;
&lt;p&gt;Standardization of tooling, lean technology stacks, common deployment infrastructure or centralized data definitions reduce total complexity and cost of software systems. However, they also introduce additional dependencies between teams, limiting the scope of decisions they can make autonomously.&lt;/p&gt;
&lt;p&gt;Instead of trying to address redundancy right away, it is good to take a holistic inventory of what has been built so far, and assess whether there is enough value in consolidation. After all, team autonomy is valuable as well, and consolidation will have technical and organizational costs which are yet to be discovered.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Disconnect to Reconnect</title>
    <link href="https://gancarski.pl/writing/disconnect-to-reconnect---3493f5a/"/>
    <updated>2024-11-26T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/disconnect-to-reconnect---3493f5a/</id>
    <content type="html">&lt;p&gt;More and more often, I disconnect to reconnect.&lt;/p&gt;
&lt;p&gt;I leave my phone at home and go for an evening walk. While walking, I listen to music from a simple MP3 player.&lt;/p&gt;
&lt;p&gt;I observe people around me, noticing new details in the environment I know so well. I sit on a bench by the river and meditate, while a myriad of lights on the other side paints reflections on the oily surface of slowly moving water.&lt;/p&gt;
&lt;p&gt;I disconnect from the noise of social media microdosing us with dopamine and cortisol. From market updates, or a sudden thought about a technical challenge waiting at work. From news about TikTok influencing the outcome of an election, somewhere.&lt;/p&gt;
&lt;p&gt;I disconnect from the buzz to reconnect with myself, with the surroundings, with the moment I am part of.&lt;/p&gt;
&lt;p&gt;It has become a habit, both simple and helpful. It clears my mind, reminds me of who I am and helps me realize what is important now and what can wait. It also breeds new ideas, recharges my energy and brings back the feeling of being in control.&lt;/p&gt;
&lt;p&gt;Take care of yourselves - disconnect to reconnect.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Async Reads #001</title>
    <link href="https://gancarski.pl/writing/async-reads-001---2875c77/"/>
    <updated>2026-01-05T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/async-reads-001---2875c77/</id>
    <content type="html">&lt;p&gt;&lt;em&gt;“Async Reads” collects writing I find worth sharing, for one reason or another. An article being included here does not imply my endorsement (or lack thereof) of the author or their opinions. It only reflects a very broad, subjective measure of quality of the writing itself.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;(1) “Jevons Paradox for Knowledge Work”&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://x.com/levie/status/2004654686629163154&quot;&gt;full article&lt;/a&gt; by &lt;a href=&quot;https://www.linkedin.com/in/boxaaron/&quot;&gt;Aaron Levie&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Why making something less costly leads to output explosion rather than time savings:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;By making it far cheaper to take on any type of task that we can possibly imagine, we’re ultimately going to be doing far more. The vast majority of AI tokens in the future will be used on things we don’t even do today as workers: they will be used on the software projects that wouldn’t have been started, the contracts that wouldn’t have been reviewed, the medical research that wouldn’t have been discovered, and the marketing campaign that wouldn’t have been launched otherwise.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;(2) “On Facing Extinction (Again)”&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://dev.to/annaspies/on-facing-extinction-again-23o5&quot;&gt;full article&lt;/a&gt; by &lt;a href=&quot;https://www.linkedin.com/in/annaspysz/&quot;&gt;Anna Spysz&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;On the decline of journalism as a profession and (possible) parallels with software engineering:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Newspapers used to have subject matter experts, fact checkers, embedded reporters, and probably another dozen roles I’d never heard of because they were gone by the time I got into the trade. Similarly, an engineering team has frontend and backend experts, devOps, appsec engineers, support engineers (ideally), UX, and so on - not to mention doc writers, developer advocates, etc., which you don’t need if you’re just building an app for your local restaurant, but presumably some people will still make software they want to sell. Will we get to a place where those experts only exist in dying institutions that we call FAANG today?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;(3) “Reflections on Vibe Researching”&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://joshuagans.substack.com/p/reflections-on-vibe-researching&quot;&gt;full article&lt;/a&gt; by &lt;a href=&quot;https://www.linkedin.com/in/joshua-gans-707bb4/&quot;&gt;Joshua Gans&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;On opportunities and perils of LLM-assisted research:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;AI is really useful, and the latest models leave o1-pro well in the dust. It definitely accelerates research. But at the same time, it has only made me more cognisant of the human factor in research. By shutting people (including myself) out of the research process, I left myself open to pushing lower-quality ideas, which the review process itself clearly surfaced.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;(4) “The rise of the disinformation-for-hire industry”&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://euvsdisinfo.eu/the-rise-of-the-disinformation-for-hire-industry/&quot;&gt;full article&lt;/a&gt; by &lt;a href=&quot;https://www.linkedin.com/company/euvsdisinfo/&quot;&gt;EUvsDisinfo&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The consequences of outsourced, mass-produced misinformation:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The emergence of (…) influence-for-hire firms has created a new strategic imbalance – asymmetrical information warfare.&lt;/p&gt;
&lt;p&gt;In this asymmetry, autocracies enjoy maximum reach with minimal risk. At home, they are protected by censorship, control, and deniability. Democracies, however, are more exposed. Bound by transparency and law, they face maximum vulnerability with limited defences.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
  </entry>
  <entry>
    <title>Async Reads #002</title>
    <link href="https://gancarski.pl/writing/async-reads-002---bd9c1ac/"/>
    <updated>2026-01-12T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/async-reads-002---bd9c1ac/</id>
    <content type="html">&lt;p&gt;&lt;em&gt;“Async Reads” collects writing I find worth sharing, for one reason or another. An article being included here does not imply my endorsement (or lack thereof) of the author or their opinions. It only reflects a very broad, subjective measure of quality of the writing itself.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;(1) “Don’t Fall Into the Anti-AI Hype”&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Salvatore_Sanfilippo&quot;&gt;Salvatore Sanfilippo&lt;/a&gt;, the creator of &lt;a href=&quot;https://redis.io/&quot;&gt;Redis&lt;/a&gt;, on letting go of hand-crafted code and &lt;a href=&quot;https://antirez.com/news/158&quot;&gt;the economic disruption soon to ensue:&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;It is simply impossible not to see the reality of what is happening. Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too). It does not matter if AI companies will not be able to get their money back and the stock market will crash. All that is irrelevant, in the long run. It does not matter if this or the other CEO of some unicorn is telling you something that is off putting, or absurd. Programming changed forever, anyway.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;As a programmer, I want to write more open source than ever, now. I want to improve certain repositories of mine abandoned for time concerns. I want to apply AI to my Redis workflow. Improve the Vector Sets implementation and then other data structures, like I’m doing with Streams now.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;What is the social solution, then? Innovation can’t be taken back after all. I believe we should vote for governments that recognize what is happening, and are willing to support those who will remain jobless. And, the more people get fired, the more political pressure there will be to vote for those who will guarantee a certain degree of protection. But I also look forward to the good AI could bring: new progress in science, that could help lower the suffering of the human condition, which is not always happy.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;(2) “The Rest of the World Disappears: Claire Voisin on Mathematical Creativity”&lt;/h2&gt;
&lt;p&gt;From an interview with French mathematician &lt;a href=&quot;https://en.wikipedia.org/wiki/Claire_Voisin&quot;&gt;Claire Voisin&lt;/a&gt;, about getting lost in &lt;a href=&quot;https://www.quantamagazine.org/a-mathematician-on-creativity-art-logic-and-language-20240313/&quot;&gt;the art and language of mathematics:&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;There’s the magic of a proof — the emotion you feel when you understand it, when you realize how strong it is and how strong it makes you. As a child, I could already see this. And I enjoyed the concentration that mathematics requires. It’s something that, getting older, I find more and more central to the practice of mathematics. The rest of the world disappears. Your whole brain exists to study a problem. It’s an extraordinary experience, one that’s very important to me — to make yourself leave the world of practical things, to inhabit a different world. Maybe this is why my son enjoys playing video games so much.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;You could compare a mathematical theorem to a poem. It is written in words. It’s a product of language. We only have our mathematical objects because we use language, because we use everyday words and give them a specific meaning. So you can compare poetry and mathematics, in that they both completely rely on the language but still create something new.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;It’s important to become familiar with the object you study, to the point that for you it’s like a native language. When a theory is beginning to form, it takes time to figure out the right definitions, and to simplify everything. Or maybe it is still very complicated, but we become much more familiar with the definitions and objects; it becomes more natural to use them.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
  </entry>
  <entry>
    <title>Notes From Letting Go</title>
    <link href="https://gancarski.pl/writing/notes-from-letting-go---320f256/"/>
    <updated>2026-02-15T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/notes-from-letting-go---320f256/</id>
    <content type="html">&lt;p&gt;I tend to be more excited about general principles than particular pieces of technology - which sometimes makes me a rather late adopter. This is why, after some frustrating earlier experiments, I have been using Claude Code seriously for merely three months. I have done it privately and as part of a pilot program at work.&lt;/p&gt;
&lt;p&gt;Below you will find rough observations made during this time. Not a single one of them is about running autonomous agent swarms to implement a new operating system over a weekend while burning through thousands of euros’ worth of tokens. Instead, they are mostly about learning what’s meaningful and &lt;strong&gt;letting go of everything else.&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Claude allowed me to set up this website, based on &lt;a href=&quot;https://www.11ty.dev/&quot;&gt;Eleventy&lt;/a&gt; and &lt;a href=&quot;https://picocss.com/&quot;&gt;Pico CSS&lt;/a&gt;, during an otherwise quite busy weekend. While it is a small project, I have barely touched any code and used a code editor to observe what changes were applied. I challenged Claude to set everything up from scratch, customize the design, clean up the resulting code, and generate sample content to see what the website would look like. Apart from tiny manual tweaks, it just worked. It was still important to explicitly mention things like cache-busting URLs for CSS definitions, generating an RSS feed or treating URLs as a human-readable linkable interface that needs to be designed rather than left to chance, but otherwise low-level choices were completely offloaded to Claude.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I asked Claude to build utility scripts to automate typical tasks like spellchecking and deployment. I found minor mistakes in them, but instead of fixing them myself, I let Claude do that. It did so with ease. Overall, repeatable, well-defined tasks are still best performed using deterministic code, but the code itself can be developed and documented by Claude as well. Doing so creates a positive feedback loop, in which Claude creates automation tools enabling it to be more effective. No need to burn through tokens when local CPU cycles will do just fine.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For simple projects, creating &lt;code&gt;CLAUDE.md&lt;/code&gt; or &lt;code&gt;AGENTS.md&lt;/code&gt; is overkill. A well-structured &lt;code&gt;README&lt;/code&gt; that is clear to a human and to an agent is enough. Claude can generate it as well, and then refine and update it by comparing it with the current state of the project and what’s still relevant in the session context.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Claude can help troubleshoot issues related to software deployments and infrastructure, provided it has access to the right tooling. Since it is particularly good at using CLIs, it can correlate your code with outputs from multiple tools, like &lt;code&gt;kubectl&lt;/code&gt; or &lt;code&gt;az&lt;/code&gt; (Azure’s CLI), and suggest what the issue is and how to resolve it. It is even more powerful when augmented with MCP servers that can provide it with additional context about your infrastructure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I routinely ask Claude for richly referenced research on topics ranging from database internals to social sciences to psychology. It is becoming my ad hoc Wikipedia and brainstorming partner. However, I still use the real Wikipedia as well, and make sure the references Claude offers are not hallucinated.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The general feeling is that of liberation&lt;/strong&gt; - it is both easier to go deeper and delegate more than before. I work more on my hobby projects because the emotional barrier to entry is now much lower. Even without accessing my code, I can at least discuss and plan a prototype while commuting. This is also one of the advantages of living in a city boasting an expansive network of public transit - you can plan and brainstorm while on the move.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In professional environments, there is a clear incentive to prioritize speed at the expense of understanding. Please try to avoid doing this. Make an effort to understand what the agent created. Ask it to guide you through the code base. Prompt it to create documentation and then read it. This code is still your responsibility, and if you lose track of what is generated, your mental model of the system you take care of will deteriorate.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learning while vibe coding instead of completely delegating requires a bit of discipline and curiosity, but also good boundary setting. You will need to decide what not to spend time learning and analyzing, as there is always more to learn than time allows. Getting through boilerplate is much faster, and iterations shrink to minutes, so it is easy to get lost in a rapid improvement loop without reflection. &lt;strong&gt;Pause whenever you feel uneasy about all of it happening too fast. It is your curiosity telling you it needs to be fed, so feed it.&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;And one more thing - a story rather than a comment. I have seen a friend of mine vibe code an ERP system for his family business, from scratch. He has no background in software development. He learned enough to know he could deploy it on &lt;a href=&quot;https://vercel.com/&quot;&gt;Vercel&lt;/a&gt; and &lt;a href=&quot;https://supabase.com/&quot;&gt;Supabase&lt;/a&gt;, and used Codex to develop, test, and review it. Out of curiosity, I checked the code base for basic flaws like incorrect handling of credentials, but found nothing of concern. (Granted, this was not a sweeping audit, but more like a focused check - we will do more of it in the near future.)&lt;/p&gt;
&lt;p&gt;Initially, my friend thought he had implemented login using OAuth, but he didn’t. It was a classic username/password flow based on the application code and database with user passwords (hashed and salted properly). I explained to him what adding actual OAuth would mean.&lt;/p&gt;
&lt;p&gt;He texted me the next day to confirm it was done.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;There is no way back&lt;/strong&gt;, and I like it this way. Even if the economics of vendor-hosted coding agents collapses at some point, we will deploy them locally if we are willing to fully internalize the cost. We may also swing too far into the realm of letting go, leaving us exposed and having to clean up the results ourselves.&lt;/p&gt;
&lt;p&gt;No matter what, though - we will not go back to the world before agent-assisted and agent-directed software development.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Async Reads #003</title>
    <link href="https://gancarski.pl/writing/async-reads-003---25e0d36/"/>
    <updated>2026-03-08T00:00:00Z</updated>
    <id>https://gancarski.pl/writing/async-reads-003---25e0d36/</id>
    <content type="html">&lt;p&gt;&lt;em&gt;“Async Reads” collects writing I find worth sharing, for one reason or another. An article being included here does not imply my endorsement (or lack thereof) of the author or their opinions. It only reflects a very broad, subjective measure of quality of the writing itself.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;(1) “Polymaths are back from the dead”&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://substack.com/@erikhoel&quot;&gt;Erik Hoel&lt;/a&gt; on the &lt;a href=&quot;https://www.theintrinsicperspective.com/p/polymaths-are-back-from-the-dead&quot;&gt;possible renaissance of polymathy&lt;/a&gt;, driven by generative models:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;To put it simply: there are two kinds of thinkers. Those rate-limited by expertise, and those rate-limited by creativity. Slowly but consistently, the rate-limiting factor for intellectual contribution has become ever deeper expertise.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;Now, I’m a known critic of AI. (…) But like with any new technology, you cannot be an honest critic if you cannot admit the positives. And I’ll admit that AI is a clear boon to polymaths and, more broadly, those more rate-limited by expertise than creativity. It favors the lone creators who have been, historically for decades now, buried amid collaborative teams. For this reason, I predict a new breed of polymaths who make use of AI to work across a far greater range than the previous generation (and specialists to be more individually productive).&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;Getting good at programming (which even at my peak wasn’t near professional level) was a trying inconvenience I had to overcome for what I actually wanted to do, which was science. If ChatGPT had been around when I was in graduate school, this barrier would have vanished in the puff of a $20 subscription, and I could have focused more on evaluation and coding tests—probably publishing twice as many papers. I had tons of ideas, and while I was indeed rate-limited by expertise, what was most annoying was that the lacking expertise wasn’t even in the domain that mattered.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;(2) “Learn in Public”&lt;/h2&gt;
&lt;p&gt;My good colleague &lt;a href=&quot;https://www.linkedin.com/in/hvoecking/&quot;&gt;Heye Vöcking&lt;/a&gt; has some &lt;a href=&quot;https://heye.dev/posts/learn-in-public-method--74hsyqc9h/&quot;&gt;wise words to share&lt;/a&gt; about the positive feedback loop between learning in public and teaching others:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Traditional learning:&lt;/strong&gt; You’re learning how LLMs choose the next token. You write an explanation on paper, pretending to teach it to an imaginary sixth-grader: “The AI looks at all possible words and picks the most likely one based on what it learned during training.” You realize you don’t understand the probability calculation, study more, and refine your explanation. The learning remains private.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Learning in public:&lt;/strong&gt; You follow the same process but document and publish that explanation on your blog, explicitly noting your confusion about the probability calculation. Now the magic happens: a machine learning engineer comments with a clearer explanation, someone shares a helpful visualization, and a student asks a question that reveals another gap. Your learning becomes collaborative.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;Each concept you explain becomes what researchers call a semantic node in a broader knowledge network. These artifacts must be machine-readable to maximize impact: just as search engines need to understand your content to rank it effectively, your learning artifacts need proper structure, clear terminology, and semantic markup (like a Wikipedia page the internet links to).&lt;/p&gt;
&lt;/blockquote&gt;
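The &quot;looks at all possible words and picks the most likely one&quot; step from the quote above can be sketched in a few lines of Python. This is a toy illustration with a made-up four-word vocabulary and invented scores, not code from any actual model or from Heye's post:

```python
import math

# Hypothetical raw scores (logits) a model might assign to candidate
# next tokens. The words and numbers here are invented for illustration.
logits = {"cat": 2.0, "dog": 1.0, "car": 0.5, "sky": -1.0}

# Softmax turns the raw scores into a probability distribution
# (all values positive, summing to 1).
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

# Greedy decoding: pick the single most likely next token.
next_token = max(probs, key=probs.get)
```

Real models sample from this distribution (with temperature, top-k, and similar tricks) rather than always taking the maximum, which is why the same prompt can produce different continuations.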
&lt;p&gt;I encourage you to read &lt;a href=&quot;https://heye.dev/posts/introducing-semantic-public-learning--74jjna98d/&quot;&gt;the second part&lt;/a&gt; as well, which expands this concept with semantic markers and practices that enhance discoverability and shareability for humans and software systems.&lt;/p&gt;
&lt;h2&gt;(3) “Clawed - On Anthropic and the Department of War”&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.deanball.com/&quot;&gt;Dean W. Ball&lt;/a&gt; wrote a &lt;a href=&quot;https://www.hyperdimensional.co/p/clawed&quot;&gt;thorough piece&lt;/a&gt; on the potential death of the American Republic, viewed through the lens of the recent political attacks on Anthropic (for more context, read &lt;a href=&quot;https://www.anthropic.com/news/statement-department-of-war&quot;&gt;Anthropic’s take&lt;/a&gt; on the issue as well).&lt;/p&gt;
&lt;p&gt;There is a lot of nuance in Dean’s writing, and all of it warrants a careful read:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;At some point during my lifetime—I am not sure when—the American republic as we know it began to die. Like most natural deaths, the causes are numerous and interwoven. No one incident, emergency, attack, president, political party, law, idea, person, corporation, technology, mistake, betrayal, failure, misconception, or foreign adversary “caused” death to begin, though all those things and more contributed.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;I am now going to write about a skirmish between an AI company and the U.S. government. I don’t want to sound hyperbolic about it. The death I am describing has been going on for most of my life. The incident I am going to write about now took place last week, and it may even be halfway satisfyingly resolved within a day.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;Here are the facts as I understand them: during the Biden Administration, the AI company Anthropic negotiated a deal with the Department of Defense (now known as the Department of War, hereafter referred to as DoW) for the use of the AI system Claude in classified contexts.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;Trump officials claim to have changed their mind not so much because they want to do mass surveillance on Americans or use autonomous lethal weapons imminently, but because they object altogether to the notion of privately imposed limitations on the military’s use of technology.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;The Department of War’s rational response here would have been to cancel Anthropic’s contract and make clear, in public, that such policy limitations are unacceptable.&lt;/p&gt;
&lt;p&gt;(…)&lt;/p&gt;
&lt;p&gt;But this is not what DoW did. Instead, DoW insisted that the only reasonable path forward is for contracts to permit “all lawful use” (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
  </entry>
</feed>
