Preface
What does artificial intelligence actually do? There’s so much buzz around it, but how do I actually use it? And which identities do I want to embrace as an AI user? Here’s to taking a stab at working out my thoughts from attending meetings, a roundtable discussion, and preliminary research.
Introduction
In the age of artificial intelligence (AI), where well-written briefs, memos, and even news articles can be generated in a matter of minutes, how do current trends in AI development affect the everyday workplace? From its early entry into the public consciousness as a “chatbot” tool, AI is now being adopted on a broad scale for enterprise purposes: 45% of event organizers now leverage AI for operations and personalization, and 55% of adopters are small businesses using AI to scale operations efficiently.1 There are already concrete signs of intent to scale SaaS AI adoption to the general populace. A telling example of this democratization of advanced AI tools is that the mobile network I use, Optus, has partnered with Perplexity AI to offer its customers, us, free access to Perplexity Pro for 12 months.2 While the entry of AI signals the decline of the knowledge-worker economy of the past 20 years,3 the Head of Communications at an AI startup valued at US $14B4 sees the ‘winners’ of this new age as people who are great at constantly learning from zero to one.
With a bit of human tweaking, AI can act as an incredibly efficient tool to expedite project outcomes. Using Perplexity Labs, Anonymous A compressed a communications crisis project that would have taken weeks into mere hours, with about an hour of human tweaking to finish. Professionals in my workplace have already picked up tools like Claude for email drafting and Perplexity for research.5 Even high-capacity tasks like strategy development or creative design are moderately automatable, as demonstrated by Perplexity Labs autonomously planning and executing a project from a well-structured prompt.
I agree that a direct benefit of AI is freeing time that would once have gone to menial, draining tasks, so the user can devote larger blocks of their day to deep work and rest. Manual workflows, what I term low-cognitive-capacity tasks such as spreadsheet tracking and manual data entry, are already being offloaded to AI tools. Low-capacity tasks such as data entry or email drafting are highly automatable, exemplified by how Fyxer’s email automation saves a solopreneur 5+ hours weekly.6 However, the rebound effect, where efficiency gains lead to increased consumption, suggests that making a tool more efficient does not automatically save time and energy; instead, more is consumed precisely because the tool became more efficient. Whether AI use leads to increasing burnout from loading on more tasks, or to strategic automation that distinguishes low-cognitive-capacity from high-cognitive-capacity work, will be determined by the user behind the AI.
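To make the rebound effect concrete, here is a minimal worked sketch in Python; every figure in it is hypothetical, chosen only to illustrate how per-task efficiency gains can still increase total time and compute consumed.

```python
# Hypothetical illustration of the rebound effect: each task gets cheaper,
# but total consumption rises because the cheaper task is done far more often.

hours_per_report_manual = 4        # assumed effort per report before AI
hours_per_report_with_ai = 0.5     # assumed effort per report with AI
reports_per_week_manual = 3        # assumed workload before AI
reports_per_week_with_ai = 30      # assumed workload once reports feel "cheap"

total_before = hours_per_report_manual * reports_per_week_manual    # 12 hours
total_after = hours_per_report_with_ai * reports_per_week_with_ai   # 15 hours

print(f"Before AI: {total_before} hours/week")
print(f"After AI:  {total_after} hours/week")
# Per-task efficiency improved 8x, yet total hours (and the compute behind them) rose.
```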
Following that train of thought toward more selective, discriminating use of AI: the proliferation of AI comes with a considerable environmental cost. Consensus is shifting from the early days, when AI was envisaged as a ‘free good,’ to a recognition that its environmental costs are not reflected in market prices. Current pricing models often prioritize accessibility over sustainability, offering AI tools at low subscription costs ($20 USD/month for Perplexity Pro and ChatGPT) despite their disproportionate computational demands. However, impending resource scarcity, particularly in energy and water, may necessitate a transition to usage-based pricing. This shift could position AI as a common-pool resource, requiring institutional oversight to prevent overexploitation. Workplaces may adopt monitoring systems to track AI utilization, aligning resource consumption with operational efficiency; a rough sketch of what that could look like follows.
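If workplaces do move toward usage-based pricing, the monitoring could be as simple as logging tokens per query and attributing an estimated cost to each team. The sketch below assumes a hypothetical in-house query log and a made-up blended rate; it is not any vendor’s actual billing API.

```python
# A minimal sketch of per-team AI usage tracking under usage-based pricing.
# All names, rates, and figures here are hypothetical illustrations.
from collections import defaultdict
from dataclasses import dataclass

ASSUMED_COST_PER_1K_TOKENS = 0.01  # hypothetical blended rate, USD


@dataclass
class QueryRecord:
    team: str
    tokens_used: int


def summarize_usage(records: list[QueryRecord]) -> dict[str, float]:
    """Aggregate estimated spend per team from logged queries."""
    totals: dict[str, int] = defaultdict(int)
    for record in records:
        totals[record.team] += record.tokens_used
    return {team: tokens / 1000 * ASSUMED_COST_PER_1K_TOKENS
            for team, tokens in totals.items()}


if __name__ == "__main__":
    log = [QueryRecord("communications", 12_000),
           QueryRecord("research", 48_000),
           QueryRecord("communications", 6_500)]
    for team, cost in summarize_usage(log).items():
        print(f"{team}: ~${cost:.2f} estimated compute cost this week")
```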
Emerging as a key competency is prompt engineering, the practice of framing questions to extract targeted insights and to engage AI’s reasoning capabilities on a specific problem. A professional on an AI task force at an investment bank7 noted that for the average worker, the most useful skill is learning to engineer prompts that generate a response focused on the core of your question. The rise of structured prompting frameworks is exemplified by users categorizing their own practice; one such breakdown traces the evolution of prompting through four stages: simple English, structured prompts, “JSON prompts,” and “XML prompts.”8 Many AI responses are general because the prompting is general. In working out how best to communicate a question, the user thinks through how to word it for their desired purpose and incidentally learns more about what they do and don’t know. For the working professional, smart AI usage grounded in prompt engineering, with attention to diction and structure to generate a targeted response, has the potential to close the gap between casual chatbot use and enterprise-level value in professional problem-solving.
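As a small illustration of the difference between general and structured prompting, the schema below is my own hypothetical example of a “JSON prompt,” not a standard and not the exact format from the cited post.

```python
# Contrast between a general prompt and a structured prompt.
# The field names in the schema are hypothetical, chosen for illustration.
import json

simple_prompt = "Write something about our product launch."

structured_prompt = json.dumps({
    "role": "communications strategist",
    "task": "draft a 150-word internal announcement for a product launch",
    "audience": "non-technical staff",
    "tone": "warm, plain English, no corporate jargon",
    "must_include": ["launch date placeholder", "one-line product summary"],
    "must_avoid": ["hype words", "generic AI-speak"],
    "output_format": "two short paragraphs",
}, indent=2)

print(structured_prompt)
# The structured version forces the author to decide audience, tone, and format
# up front, which is where much of the incidental learning happens.
```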
One of AI’s strengths lies in teaching. It can tutor people on complex topics and act as an essay editor. If AI is good at teaching, it will empower people who are good at learning. “The future is going to be people that have not just one PhD, but people who have the knowledge of twenty PhDs.” It will empower people who decide to act on an idea, by teaching them the mindset along with the skillset to bring it to life.
As much as AI could exponentially lift workforce productivity by acting as a tool for agentic learning, overreliance on AI tools simultaneously threatens to disrupt traditional cognitive workflows. A preliminary MIT brain-scan study correlates excessive AI reliance with reduced reasoning skills,9 finding that repeated use of ChatGPT was associated with a decline in the brain’s capacity for the critical thinking professionals value so highly when processing and analysing information. For younger workers who substitute tools for critical thinking, overreliance on AI chatbots risks a massive skills gap across generational change: young professionals may miss the fundamental experience of honing their intuition and problem-solving in their industries by hand, because problems are ‘off-shored’ to AI chatbots in the name of efficiency.
For high-capacity tasks like research, tools like Perplexity Pro can streamline the process but risk homogenizing outputs. In my workplace, more than a couple of speaker submission applications now lack individual creativity and personality, with an undeniable ‘beige’ tone indicative of AI writing.10 Increasing “AI-speak” across all mediums, from emails to project submissions, risks the loss of individuality. In the field of communications, as AI-speak becomes more common, something akin to an emotional language barrier emerges: overly structured outputs in everyday workplace communication, from emails to project submissions, risk alienating first-time stakeholders.
AI’s future hinges on balancing efficiency with human agency. In the foreseeable future, soft skills ranging from face-to-face advocacy in a team setting to pitching a project with clarity and individuality to writing with intention will be valued far more than a polished corporate tone. AI-powered analytics tools like Freckle improve efficiency but struggle to replicate human intuition in attendee engagement strategies. In workplaces, AI-generated meeting minutes may inadvertently violate non-disclosure agreements, miss contextual subtleties, or even misattribute claims to attendees, raising ethical concerns.
Tools like Perplexity Labs and Claude may streamline workflows, but success requires intentional adoption with an eye on what convenience costs the vitality of thought. “The feeling of being alive comes from being fully engaged in the right task, not free from all tasks.”11 Potential solutions include cross-generational mentorship programs, pairing AI-literate juniors with experienced strategists to balance automation and judgment. Ethical guidelines like cap-and-trade models for resource-intensive AI usage,12 such as limiting the number of prompts allowed per team (sketched below), can help mitigate environmental costs and encourage smart use of AI. AI is reorienting jobs toward human strengths like empathy and complex problem-solving, a trend poised to accelerate as frontier models prioritize reasoning and multimodal integration. The path forward lies in embracing AI as a co-pilot, not a replacement, ensuring generational collaboration and sustainable growth.
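For what a cap-and-trade style prompt allowance might look like in practice, here is a minimal sketch; the quotas, team names, and the idea of trading unused allowance between teams are hypothetical illustrations rather than an existing policy tool.

```python
# A minimal sketch of a cap-and-trade style prompt allowance per team.
# Quotas and team names are hypothetical; this is an illustration, not a policy engine.

class PromptAllowance:
    def __init__(self, weekly_cap: int):
        self.cap = weekly_cap
        self.used = 0

    def spend(self, n: int = 1) -> bool:
        """Record n prompts; refuse once the weekly cap is exhausted."""
        if self.used + n > self.cap:
            return False
        self.used += n
        return True

    def trade(self, other: "PromptAllowance", n: int) -> bool:
        """Transfer unused allowance to another team (the 'trade' in cap-and-trade)."""
        if self.cap - self.used < n:
            return False
        self.cap -= n
        other.cap += n
        return True


teams = {"events": PromptAllowance(200), "research": PromptAllowance(400)}
teams["research"].trade(teams["events"], 50)       # research lends 50 prompts to events
print(teams["events"].cap, teams["research"].cap)  # 250 350
```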
References for future follow-up:
Parry, Adam. “Event Industry News Launches Groundbreaking AI Report 2025, in Partnership With EventMobi.” Event Industry News, February 19, 2025. https://www.eventindustrynews.com/guides/event-industry-news-launches-groundbreaking-ai-report-2025-in-partnership-with-eventmobi.
Optus. “What is Perplexity?” Accessed July 9, 2025. https://www.optus.com.au/support/answer?id=20333#:~:text=To%20redeem%20follow%20these%20steps,ai%2Fjoin%2Fp%2Fredeem.
Ernst, Ekkehardt, Rossana Merola, and Daniel Samaan. “Economics of Artificial Intelligence: Implications for the Future of Work.” IZA Journal of Labor Policy 9, no. 4 (August 14, 2019). https://doi.org/10.2478/izajolp-2019-0004.
Interviewee A. Personal Interview. June 27, 2025.
Interviewee B. Personal Interview. June 29, 2025.
Kang, Jean. “I Quit Big Tech to Start My Own Business. Here Are 5 AI Tools That Save Me up to 15 Hours a Week.” Business Insider, June 13, 2025. https://www.businessinsider.com/ai-tools-save-15-hours-week-big-tech-founder-2025-6.
Interviewee C. Personal Interview. May 14, 2025.
Merrick, Oliver. “I’ve sent 36k+ messages to Chat-GPT…” LinkedIn, June 18, 2025, https://www.linkedin.com/posts/oliver-merrick_ive-sent-36k-messages-to-chat-gpt-so-activity-7340526058129002496-P_pH?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEUt2XABLR7q0waBKqWarg3pcA0MExh4Qxg.
Chow, Andrew. “ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study.” TIME, June 23, 2025. https://time.com/7295195/ai-chatgpt-google-learning-school/.
Interviewee D. Personal Interview. June 29, 2025.
Clear, James. “3-2-1: On Thinking Like a Scientist, How to Stand Out From the Crowd, and What Wisdom Looks Like.” May 15, 2025. https://jamesclear.com/3-2-1/may-15-2025.
VentureBeat. “Perplexity Offers Free AI Tools to Students Worldwide in Partnership With SheerID.”
Dwyer noted that unlike traditional software, AI tools have direct computational costs for each query. “Every query has a direct cost in terms of compute,” Dwyer said.