I consider myself a very lucky person. I’m not even 40 years old, yet I’ve already lived through three major technological revolutions.
I remember when the internet came into my life. I was 8 years old, and I had the chance to spend 30 minutes in what nowadays you would call an internet cafe (it was just a call shop with two machines on a dial-up connection). Thirty minutes on a 56 kbps connection wasn’t much for an 8-year-old trying to figure out what a search engine was, but it definitely left a mark on me.
Then mobile arrived. I was in my late teens by then, and you could see how concepts I had grown used to as an internet teenager were spreading to the general public. Everyone had a computer in their back pocket.
Now it’s AI. Maybe this was the expected next step? If I think about it, we’ve had Skynet since the 80s (The Matrix was my introduction to the concept), and the idea of thinking machines was always there. It was just a matter of time before technology crossed a fuzzy barrier that marketers could sell as “thought”.
What I’m most astonished about is how fast we incorporated it into our lives, and how little thought we are giving to the “thoughts” it’s giving us.
In this day and age, I choose to invest some of my personal development time in critical thinking: learning to think twice about whatever is in front of me.
What is critical thinking?
When I talk about critical thinking, I mean the process of deeply analyzing everything we communicate. That covers not only the message’s structure, its logical validity, and its truthfulness, but also the choice of wording and the rhetorical devices, such as analogies and metaphors, used to persuade the receiver to think and react in a certain way.
Why is it important to develop my critical thinking?
Most engineers can benefit from analyzing the information they receive with a skeptical eye, developing a personal point of view by contrasting what they receive with their own values and principles.
In my day-to-day work, I consume most of my information from internet sources that I don’t know much about. A lot of this information ends up being heavily opinionated, if I’m lucky enough to find content that’s factual. Critical thinking gives me a good position to question all this information, filtering only the pieces I consider valuable to my work and growth.
Do I think AI thinks?
Some people confuse eloquence with knowledge, believing that confident speech and big words must equal mastery of a subject and, thus, truthfulness. This assumption can be abused to sell arguments as sound even when they lack validity or rest on false premises.
I think AI tools tend to fall into this bucket. The responses I get are usually loaded with a bland, fake praising tone, and they seem to favor flair over conciseness.
I believe there is a problem with how companies building AI market their products as “reasoning” solutions: we wouldn’t consider any human’s reasoning acceptable if they couldn’t guarantee its accuracy and reliability, yet somehow these companies and their systems seem to convince us otherwise.
</replace_body>
My understanding is that we’re still far from systems that “really think”. Progress has been made, like the introduction of inference-time compute (see this video to learn more about it → https://www.youtube.com/watch?v=CB7NNsI27ks), but it seems to be more along the lines of tuning the parameters of a simulation of thinking than actual thinking.
Will AI ever get to “think” as we do? Will it do something even better? That sounds like a great topic for modern philosophy to dig into. For the time being, it’s making me more and more interested in great human thinkers (“Great Thinkers” by The School Of Life became a daily read of mine), and more and more skeptical of AI responses.
My relationship with AI
I’ve noticed that the more time I spend using AI, the more trust I build in it. More fascinatingly, I seem to trust what ChatGPT and Claude, the main big players, offer over models I know less about. But why?
Could it be that the more we use these tools, the more trust we develop and the more our skepticism towards their content decreases? Could it be that all the marketing around these big players’ “reasoning” is paying off for their companies?
It’s because of all this that I decided to stand on the human side of the discussion and do what humans do best: question everything. I still use AI as a source of guidance, but I am very skeptical about anything in a field I don’t know well. Whenever I use it on a technical matter, I corroborate its accuracy with a curated website or book on the topic.
Final thoughts
I welcome AI in my life. It augments my work in many ways while making me grow as a critical thinker and reconnecting me with deep thinking. I will still fall back on corroborated sources of knowledge whenever possible. I really hope AI will not undermine our books, universities, and knowledge institutions; that would be a real loss for humanity in this new world.
