Château Pontet-Canet 2019

My pilgrimage through the landscape of French viticulture continues with another vintage of Château Pontet-Canet. I previously tasted the 2006 vintage, and though I enjoyed it, the 2019 offers an entirely different tasting experience, one that comes remarkably close to my ideal of what a Bordeaux should be.

The 2019 Pontet-Canet bursts with black currant, plum, and dark chocolate. Notes of tobacco and cedar gently penetrate the base of dark fruit in the long finish. The wine possesses an uncanny lightness on the palate despite exhibiting intense, rich flavors. Indeed, this wine's most distinctive quality may be how effortlessly it navigates that apparent contradiction.

The tannins are slightly sharp and not yet fully knit together, but that’s no surprise given the youth of this vintage. With a decade in the cellar, the 2019 Pontet-Canet may be close to perfect.

Rethinking Biases: Concatenation and String Builder

Everybody knows that string builder classes are more efficient than concatenation, right? Statements like this are passed between generations of developers and quickly become common wisdom. But rapid language evolution can render dated advice irrelevant. So, in the context of Apex development, does this piece of common wisdom hold up? I was recently tasked with a project that required assembling a massive amount of string data into a large JSON payload, which presented the perfect opportunity to put the claim to the test. The answer? Well, it depends, but it was not what I expected.

To control as many variables as possible, I created a short code snippet that concatenates identical strings of a precise size using both techniques. As anticipated, the string builder technique is faster and uses fewer CPU resources with large strings; however, basic concatenation wins in both speed and efficiency for smaller tasks.
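
A minimal sketch of that comparison in anonymous Apex follows. This is not my original harness; the test string, the iteration count, and the use of a List<String> with String.join() as the "string builder" are illustrative assumptions.

    // Hedged sketch: compare naive concatenation against the common Apex
    // "string builder" idiom (collect parts in a List<String>, then join once).
    Integer iterations = 5000;       // vary this to produce the figures in the table below
    String chunk = 'abcdefghij';     // placeholder test string

    // Technique 1: naive concatenation
    Integer startCpu = Limits.getCpuTime();
    String concatenated = '';
    for (Integer i = 0; i < iterations; i++) {
        concatenated += chunk;
    }
    System.debug('Concatenation: ' + (Limits.getCpuTime() - startCpu) + ' ms');

    // Technique 2: "string builder" style, joining the collected parts once
    startCpu = Limits.getCpuTime();
    List<String> parts = new List<String>();
    for (Integer i = 0; i < iterations; i++) {
        parts.add(chunk);
    }
    String built = String.join(parts, '');
    System.debug('String builder: ' + (Limits.getCpuTime() - startCpu) + ' ms');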

Technique          5,000 Iterations    50,000 Iterations
Concatenation      241 ms              4,715 ms
String Builder     445 ms              4,614 ms

In fact, concatenation maintains a speed advantage up to a surprisingly high number of iterations. The chart below shows that concatenation holds the lead until just after 55,000 iterations! So, what's the verdict? Basic concatenation is faster under most circumstances. Only extremely large strings benefit from the string builder technique.

Chicago 2023

I had seen it hundreds of times, but now it was simultaneously familiar and foreign, as if I were looking at it for the first time. It shimmered as I walked around the room—the lights catching the deep contours of each brush stroke. His face possessed uncanny depth, and he appeared to come alive. His piercing gaze stared through me as if I didn't exist. The longer I stared into his eyes, the more expression I could glean from his gaunt face. He appeared profoundly sad, with a tinge of resignation hinting that this emotion was not unfamiliar. My experience at the Art Institute of Chicago's Vincent van Gogh exhibit echoed my broader sense of the trip; the city was familiar, but I viewed it as a stranger.

I aimed to look past the trivial details and capture broad shapes given form by the interplay with light—to distill the city's architecture to its essence.

Fratelli Giacosa Basarin Vigna Gianmaté 2015

I have a complicated relationship with the Fratelli Giacosa Basarin Vigna Gianmaté. After my first sip, I contemplated pouring the remainder of the bottle down the drain. Like any good narrative, this bottle contained a plot twist with an unexpected outcome. This Barbaresco is an excellent example of a high-quality wine that doesn't show well straight from the bottle but transforms after decanting. Immediately after opening, the wine is completely out of balance, possessing an impenetrable wall of oak, vanilla, tobacco, and earth that masks any hint of fruit. I consider decanting this wine to be absolutely essential, so my tasting notes describe my experience after allowing one hour of aeration.

Decanting the wine puts the intense garnet red color on full display. There is more sediment than expected for a wine possessing less than a decade of age. The Basarin Vigna Gianmaté 2015 tastes predominantly of cherry, bolstered with an assertive background of oak and vanilla and an earthy finish. The wine is well structured with ample acidity to balance the tannins.

Château Pontet-Canet (2006)

Bordeaux wine is often considered a symbol of elegance, sophistication, and complexity. Nestled in southwest France, Bordeaux is renowned for producing some of the world's most coveted wines. The complexity of Bordeaux lies in the intricate balance of flavors, aromas, and textures unique to each wine. The region's rich history, diverse geography, and meticulous winemaking techniques all contribute to the wines' complex and multifaceted nature. Bottles from this region have a reputation for eye-watering prices; however, many Bordeaux wines offer excellent value, delivering 90% of a coveted first growth at a fraction of the cost. Château Pontet-Canet has long been one of my favorites, offering fantastic wine with quality that remains consistent between vintages. The wines produced by Château Pontet-Canet are renowned for their robust flavors, complexity, and exceptional aging potential.

Château Pontet-Canet is a fifth-growth classified estate with a history dating back to the early 18th century. Located in the Pauillac appellation in the Bordeaux region of France, the estate has been owned by the Tesseron family since 1975. I've been particularly captivated by the story of Alfred Tesseron, the current owner, who took charge of the estate in 1994. His passion for organic and biodynamic farming makes him a visionary leader in the conservative region of Bordeaux.

The 2006 vintage of Pontet-Canet is somewhat undervalued, given that the year presented a challenging growing season. The wine still tastes young despite having a bit of age. It remains concentrated, with assertive notes of blackberry, plum, and currant. The present but integrated tannins give way to a long, satisfying finish. It doesn't match the 2010 vintage that Robert Parker scored a perfect 100 points, but it isn't too far behind at half the cost.

Archetype El Vergel Estates Gesha 240 Horas - Competition Series

The El Vergel Estates Gesha 240 Horas is a coffee that comes with a story. This year, Archetype Coffee competed in two competitions. Archetype’s owner, Isaiah Sheese, won the United States Barista Championship, and Jesus Iniquez, one of Archetype’s most skilled baristas, placed fourth in the United States Brewers Cup Championship. This is the coffee Jesus selected to compete with. Like nearly all top-quality competition coffees, the El Vergel Gesha was available only in a limited quantity of eighty 227-gram bags.

The coffee really shines as a pour-over, with notes of black cherry and tropical fruit giving way to floral undertones. Rose hips and moderately dark chocolate dominate the long, evolving finish while the tropical fruit lingers on the palate. This coffee is complex, and it’s obvious why it was selected for competition.

Despite being brewed using a filter in competition, the coffee also shows very well as espresso. A fairly flat seven-bar profile with a short pre-infusion brings out the vibrance and sweetness of the tropical fruit while providing balanced acidity.

Espresso

Bean Weight 18 g
Brew Time 26 sec.
Pressure 7 bar
Water Temperature 91°C
Yield 40 g

Filter (Origami)

Bean Weight 18 g
Brew Time 2:10
Water Temperature 96°C
Yield 280 g

Interpretability in Machine Learning

Since OpenAI released ChatGPT, its large language model (LLM) chatbot, machine learning and artificial intelligence have entered mainstream discourse. The reaction has been a mix of skepticism, trepidation, and panic as the public comes to terms with how this technology will shape our future. What many fail to realize is that machine learning already shapes the present; developers have been grappling with introducing this technology into products and services for years. Machine learning models are used to make increasingly important decisions – from aiding physicians in diagnosing serious health issues to making financial decisions for customers.

How it Works

I strongly dislike the term "artificial intelligence" because what the phrase describes is a mirage. There is no complex thought process at work – the model doesn't even understand the information it is processing. In a nutshell, OpenAI's model powering ChatGPT calculates the statistically most probable next word given the immediately surrounding context based on the enormous amount of information developers used to train the model.

A Model?

Let's say we compiled an accurate dataset containing the time it takes for an object to fall from specific heights:

Height    Time
100 m     4.51 sec
200 m     6.39 sec
300 m     7.82 sec
400 m     9.03 sec
500 m     10.10 sec

What if we need to determine the time it takes for that object to fall from a distance we don't have data for? We build a model representing our data and either interpolate or extrapolate to find the answer:

t = \sqrt{\frac{2h}{g}}

where h is the drop height and g is the acceleration due to gravity.
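
For example, to estimate the fall time from a height the table doesn't cover, say 250 m, we plug it into the model (assuming standard gravity, g ≈ 9.81 m/s², and ignoring air resistance):

    t = \sqrt{\frac{2 \times 250}{9.81}} \approx 7.14 \text{ sec}

The same formula reproduces the measured values above, e.g. roughly 7.82 sec for 300 m.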

Models for more complex calculations are often created with neural networks, mathematical systems that learn skills by analyzing vast amounts of data. Each node in a vast collection evaluates a specific function and passes the result to the next node. Simple neural networks can be expressed as mathematical functions, but as the number of variables and nodes increases, the model can become opaque to human comprehension.
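
To make that concrete, here is a toy illustration of a single node, written in Apex and not drawn from any real model: it multiplies each input by a weight, sums the results, and passes the total through an activation function before handing it to the next node.

    // Toy example of one neural-network "node" (neuron). The weights, bias,
    // and sigmoid activation are illustrative assumptions, not a real model.
    public class ToyNode {
        private List<Double> weights;
        private Double bias;

        public ToyNode(List<Double> weights, Double bias) {
            this.weights = weights;
            this.bias = bias;
        }

        // Weighted sum of the inputs plus a bias, squashed into the 0-1 range.
        public Double activate(List<Double> inputs) {
            Double total = bias;
            for (Integer i = 0; i < inputs.size(); i++) {
                total += weights[i] * inputs[i];
            }
            Double denominator = 1 + Math.exp(-total);  // sigmoid activation
            return 1 / denominator;
        }
    }

Chain a handful of these and the math stays legible; chain millions, each with its own weights, and the network as a whole becomes the opaque model described above.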

The Interpretability Problem

Unfortunately, for many complex models, opening them up and providing a precise mathematical explanation for a decision is impossible. In other words, models often lack human interpretability and accountability. We often can't say, mathematically speaking, exactly how the network makes the distinctions it does; we only know that its decisions align with those of a human. It doesn't require a keen imagination to see how this presents a problem in regulated, high-stakes decision-making.

Let's say John visits a lender and applies for a $37,000 small business loan. The lender needs to determine the probability that John will default on the loan, so they feed John's information into an algorithm, which computes a low score, and the application is denied. By law, the lender must provide John with a statement of the specific reasons for the denial. In this scenario, what do we tell John? Today, we can reverse-engineer the model and provide a detailed answer, but as computing resources become more powerful and less expensive, even tomorrow's comparatively simple models will quickly test the limits of human understanding. So how do we design accountable, transparent systems in the face of exponentially growing complexity?

Solutions?

Proponents of interpretable models suggest limiting the number of variables a model uses. The problem with this approach becomes apparent once you consider how neural networks weigh variables. Models multiply results by coefficients that determine the relative importance of each variable or calculation before passing them to the next node. These coefficients and variables are often positive and negative numbers running 20 to 50 decimal places long. While understanding the data underpinning a decision is essential, that alone is not enough to produce a clear explanation. We can partially address this by building tooling that abstracts implementation details and provides a more intelligible overview of the model; however, this still only yields an approximation of the decision-making process.

Other thought leaders in machine learning argue that the most viable long-term solutions may not involve futile attempts to explain the model but should instead focus on auditing and regulating performance. Do large volumes of test data reveal statistical trends of bias? Does analyzing the training data show any gaps or irregularities that could result in harm? Unfortunately, this does not solve the issue in my hypothetical scenario above. I can't conclusively prove that my current decision was correct by pointing to past performance.

Technology is simply moving too rapidly to rely on regulations, which are, at best, a lagging remedy. We must pre-emptively work to build explainability into our models, but doing this in an understandable and actionable way will require rethinking our current AI architectures. We need forward-looking solutions that address bias at every stage of the development lifecycle with strong internal governance. Existing systems should undergo regular audits to ensure small changes haven't caused disparate impacts.

I can't help but feel very lucky to live in this transformative sliver of time, from the birth of the personal computer to the beginning of the internet age and the machine learning revolution. Today's developers and system architects have a massive responsibility to consider the impact of the technology they create. The future adoption of AI heavily depends on the trust we build in our systems today.

Data-Driven Espresso

I have a well-earned reputation as a "coffee snob" at work. Co-workers snicker as I don my jacket, preparing to walk eight blocks in subzero temperatures just for a better cup of coffee. After earning this reputation, I'm often asked about coffee, particularly espresso. When asked about options for making espresso at home, I usually respond with another question—do you want a new hobby?

Lately, I've tunneled deeply into the bottomless rabbit hole of coffee. As is my nature, I've taken an intensely data-driven approach to experimenting with flavor and maintaining consistency. Tightly controlling variables and changing one at a time is the only meaningful way to judge the outcome of a change. But, of course, this requires extreme precision, which is where equipment and technique come into play.

Most espresso machines, even those at the high end, fail to provide feedback about the brewing process. Defects manifest themselves clearly through tasting, but the ultimate cause is often unclear. This lack of transparency is frustrating for a person with a deeply analytical personality. Luckily, data-driven coffee nerds now have options.

A monumentally modest company named Decent has become an industry leader in the art of brewing espresso with the extreme precision afforded only by an automated, software-driven design. Every variable can be controlled and dissected, from pressure to flow, weight, temperature, and time.

The DE-1 after brewing the second-best espresso I’ve ever had.

Decent Espresso Machine

The Decent espresso machine is a game-changer, offering a level of control and precision unmatched by other espresso makers, and that precision translates into unparalleled consistency. There is no better option for technophile coffee lovers looking to take their espresso brewing to the next level.

The machine's software allows for an incredible level of customization. Users can create and save their own recipes and profiles, tailoring the brewing process to their exact preferences. The software also provides real-time feedback, making it easy to make adjustments throughout the extraction process.

One of the most impressive features of the Decent machine is its ability to track and display data about each shot. For example, below is a ten-second pre-infusion followed by a standard nine-bar pressure profile compared with a pre-infusion followed by a long "bloom" phase that reduces astringency and bitterness.

The traditional flat nine-bar pressure profile has become the industry standard not because it offers the best extraction, but because it is a good compromise between quality and time—an essential consideration for a busy cafe. Applying modern technology to this century-old brewing process demonstrates that no system, no matter how many decades of incremental refinement stand behind it, is beyond the reach of human creativity mixed with a pinch of technology.

Increase Efficiency with Platform Cache

Platform Cache is a memory layer that stores your application's session and environment data for later access. Applications run faster because they store reusable data instead of retrieving it whenever needed. Note that Platform Cache is visible and mutable by default and should never be used as a database replacement. Developers should use cache only for static data that is either frequently needed or computationally expensive to acquire. Let's explore the use of cache in a simple Apex class.
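
The original snippet isn't reproduced in this excerpt, so here is a minimal reconstruction of the uncached approach described below; the class and method names are illustrative.

    // Sketch of the uncached version: every call re-runs the expensive
    // Schema.getGlobalDescribe() describe operation.
    public class SchemaService {
        public static Map<String, Schema.SObjectType> getSchemaMap() {
            // Map of every sObject API name to its sObject token.
            Map<String, Schema.SObjectType> schemaMap = Schema.getGlobalDescribe();
            System.debug(schemaMap.size() + ' sObjects described');
            return schemaMap;
        }
    }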

In the example above, we acquire the objects defined in the environment to build a schema map. The Schema.getGlobalDescribe() function returns a map of all sObject names (keys) to sObject tokens (values) for the standard and custom objects defined in the environment in which we're executing the code. Unfortunately, we're not caching the data, which makes this an expensive process: this code consumes 1,307 ms of CPU time with a heap size of 80,000 bytes. Let's improve it by using a cache partition.
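
Again as a sketch rather than the original code: the cached variant checks an org cache partition first and only falls back to the describe call on a cache miss. The partition name 'local.SchemaCache' and the cache key are hypothetical, and the cached value must meet Platform Cache's serialization requirements.

    // Sketch of the cached version: read the schema map from Platform Cache when
    // present; otherwise build it once and store it for subsequent requests.
    public class CachedSchemaService {
        public static Map<String, Schema.SObjectType> getSchemaMap() {
            // 'local.SchemaCache' is a hypothetical org cache partition name.
            Cache.OrgPartition partition = Cache.Org.getPartition('local.SchemaCache');
            Map<String, Schema.SObjectType> schemaMap =
                (Map<String, Schema.SObjectType>) partition.get('globalDescribe');
            if (schemaMap == null) {
                schemaMap = Schema.getGlobalDescribe();
                partition.put('globalDescribe', schemaMap);
            }
            return schemaMap;
        }
    }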

This version performs the same operation but caches the result. We instantiate a cache partition and run the same function to build our schema map, but this time we place the result in the cache so subsequent requests can read it from memory instead of rebuilding it. Our processing requirements diminish significantly, consuming only 20 ms of CPU time.

Despite the breathtaking advances in processing power, developers should always ensure they are writing efficient code that possesses a minimal processing footprint and scales with increased volume.

Further Reading

Salesforce Developer Guide - Platform Cache

The Paradox of Efficiency

It started earlier than I thought. In January, I wrote an article making predictions for 2023. One of my subheadings was “A Year of Doing More with Less,” where I argued that companies need to look for focused, strategic areas of investment to increase efficiency. We’re now seeing significant layoffs in the technology sector: year to date, Google has laid off 12,000 workers, Microsoft 10,000, and Salesforce 8,000. Unfortunately, these companies are taking a short-term view of efficiency that will damage their long-term success. Instead of finding areas where technologies can work together to provide multiplicative value, these CEOs are chasing quick wins, and I would argue that this quest for efficiency may actually decrease real efficiency.

Aggressive Headcount Reduction Limits Cross-Selling

Customer acquisition has its limits. Eventually, continued growth requires selling additional services to existing customers. Gathering revenue figures from sales is a trivial task, but it is far harder to pinpoint the role that customer satisfaction with the servicing of existing products plays in that revenue. The difficulty of attributing hard figures to servicing makes these areas prime targets for headcount reduction. Yet why would a customer consider making another purchase when the business cannot support the products they’ve already bought? Platform lock-in has limits, and customers will eventually move to a competitor. Headcount reduction decisions are often made with the flawed assumption that all other variables will remain constant and that productivity gains elsewhere will offset the smaller workforce. But this is seldom true unless the reduction is minimal.

The Inefficient Process of Gaining Efficiency

A consequence of chasing efficiency is its opportunity cost: the drain on resources that would otherwise have promoted real efficiency in the long term. Isn’t it curious that the companies most aggressively pursuing efficiency at all costs are often stuck making incremental improvements to existing technology? Why are they so rarely responsible for radical, groundbreaking innovations? Why do comparatively small startups with different organizational values so often deliver these genuine breakthroughs? Companies with aggressive management directives to slash costs and reduce overhead often fail to invest in the areas that produce innovation. In the long term, this lack of investment profoundly impacts company culture, often precipitating an exodus of forward-looking employees. Our industrial society values rapid, predictable returns on investment and neglects the necessarily inefficient process of innovation, which shareholders see as wasteful. This is the crux of the paradox: the quest for “friction-free” processes may be slowing the discovery of more fundamental changes that would have a much more profound impact on efficiency.

Our society views imagination with a strong sense of ambivalence. Humans are naturally short-term thinkers, and it takes an abundance of thoughtfulness to understand how a series of decisions made today will make a larger impact tomorrow.