Feeding Our Foundations

Foundational Models

In the world of AI, a foundation model is a machine learning or deep learning model trained on broad data, allowing it to be applied across a wide range of use cases. These models have significantly impacted AI and are the driving force behind prominent generative AI applications like ChatGPT. They undergo pre-training on vast amounts of unlabelled data, learning general-purpose features that can later be put to work on many different tasks.
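To make that "trained once, applied widely" idea a little more concrete, here's a minimal sketch, assuming the open-source Hugging Face transformers library (our illustration, one tool among many), where the same family of pre-trained models is pointed at two quite different tasks without any extra training from us:

```python
# A minimal sketch of "one foundation model family, many use cases".
# Assumes the Hugging Face `transformers` library is installed (pip install transformers).
from transformers import pipeline

# Task 1: sentiment analysis, powered by a model pre-trained on broad text data.
sentiment = pipeline("sentiment-analysis")
print(sentiment("The quality of my information diet shapes my decisions."))

# Task 2: zero-shot classification, re-using that broad pre-training
# for labels the model was never explicitly trained on.
classifier = pipeline("zero-shot-classification")
print(classifier(
    "I only read headlines that already agree with me.",
    candidate_labels=["confirmation bias", "balanced research"],
))
```

The point isn't the code itself; it's that neither task required us to feed the model anything new, which is exactly why the data it was originally fed matters so much.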

There’s a lot of focus currently on the quality of the data being used to train these models, along, of course, with the legality of using copyrighted data. A phrase we hear a lot is “garbage in, garbage out”. It matters because these models increasingly shape our decision making and how we frame the problems we’re looking to solve. Even more so when we consider that 74% of us, when surveyed, say we trust the output of AI co-pilots without sense-checking it…

It also highlights an issue we rarely talk about, for fear of being too controversial: the same applies to all of us humans. The information we consume shapes the opinions we adopt and develop as our own. Our own foundational mental models. The same warning applies: garbage in, garbage out.

 

Skyscrapers, both buildings and aspirations, need solid foundations

What comes out of your mouth is determined by what goes into your mind.

How’s your information diet looking?

When you stop to consider what information you’re consuming, are you opting for the inputs that will stretch and challenge you, or the ones that will give you the dopamine hit of confirmation bias? Are you consuming kale chips, or chips chips?

It’s no secret that we’re moving at a faster and faster pace. This means we’re more likely to use convergent thinking and rely on our heuristics [read: immediately jump to a solution or conclusion and lean on our mental shortcuts]. All well and good, except many of the more complex problems we solve, and the most worthwhile interactions we have, benefit from divergent thinking [read: exploring lots of possible solutions and ideas in a creative manner].

We’re all overstretched and we’re all in echo chambers, so relying on heuristics to make quick decisions is tempting and understandable. Yet it’s worth noting that, amid all the debate around AI, we expect its decision making to be accurate 100% of the time, while for humans the average process error rate is 7–10%. On a good day.

So what can we do to improve performance? Here at Transform Perform we try to manage our information diet. We only read the limited news we need to make decisions for the business or to bring value to our clients, and even then from a diverse range of sources so as not to be overly swayed.

But we have to confess that recently many hours have been lost to the addictive vortex of reels on social media. It’s why we hesitated to go onto TikTok (yet we finally have), and why we have a “no X, née Twitter” policy. We’re all human, so we’re all super susceptible to the algos and the brain candy they feed us over the slightly more challenging, and far more rewarding, material we probably ought to be feeding our own mental models. Sometimes we all opt for chips chips.

How to up our game?

The good news is that, as with all habits, this one is easy to counter with a few tweaks. It’s always best to start small.

  • As with our food when we’re reviewing our diets, we might want to lean up and cut the “sugar” from our information diet. Review your sources, assess whether they’re helpful or harmful, and indulge in moderation for those you decide to keep. Just as you wouldn’t keep junk food in the cupboards, try making your most addictive and mind-numbing apps harder to find, so your mind has a chance to resist the impulse when it comes.
  • If you’re already using ChatGPT, take a few more seconds once you’ve got your initial answer to ask it for the counter-argument (see the sketch after this list). Oh, and always say please and thank you; remember, it’s learning from you.
  • If you always tend to read your news from one source, experiment with engaging with “the other side” for a week to start. See if anything surprises you. Control your desire to be defensive; remember, it’s just an experiment.
  • If you’re ready to level up, try this divergent thinking exercise next time you have a problem to solve, or a debate to be had. 
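And for those who prefer talking to the model through code rather than the chat window, here’s a rough sketch of that counter-argument habit, assuming the official OpenAI Python client and an illustrative model name (our sketch of the habit, not a prescribed workflow):

```python
# A rough sketch of the "ask for the counter-argument" habit via the OpenAI API.
# Assumes the official `openai` Python client and an API key in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

question = "Should we rely on a single news source to stay efficient?"

# First pass: the quick answer (convergent thinking).
first = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)
answer = first.choices[0].message.content

# Second pass: deliberately ask for the other side (divergent thinking).
counter = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Thank you. Now please give me the strongest counter-argument to that answer."},
    ],
)
print(counter.choices[0].message.content)
```

The second call is the whole point: one extra prompt, and the model is asked to argue against itself rather than simply reinforce you.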

 

The machines are a huge benefit to our day-to-day. They can handle IQ-related work and repetitive processes far better than we can. But when it comes to being human, and to EQ, that’s for us to lead on. Feed your own foundational models with gold, not garbage.

You should speak to us.  
