Tiwa Adejuyigbe

I mostly write about technology, society and product-building. I'm currently writing a thesis on Nigeria's cryptocurrency policy.

Basic Income, Limitless Outcomes

I credit a lot of my thinking on futuristic economic systems to progressive economists such as Rutger Bregman, Kate Raworth and Guy Standing. Their work has challenged conventional economic wisdom and opened possibilities for reimagining how our societies could function in the twenty-first century.

Alongside the 'four-day workweek', the Universal Basic Income (UBI) was one of the first 'progressive' economic concepts I came across in my research. The UBI concept is rather straightforward: a government programme in which every adult citizen receives a set amount of money regularly, no strings attached. I know, it sounds too good to be true.

Despite a few successful pilot schemes with small-to-medium sized populations, the million-dollar question remains: how does one fund such a programme at a truly universal scale? I suppose that's why it's often relegated to the realm of utopian ideals. Most proposed solutions involve tax adjustments or sovereign money creation, but neither seems entirely sufficient. The UBI appears difficult to implement, and governments, with an eye on re-election, understandably prefer predictability or, where possible, incremental developments.

Why a UBI in the first place? Well, why not? The most fascinating thing to me about the UBI has been its bipartisan appeal. The goals of a basic income system are primarily to alleviate poverty and to replace other need-based social programmes that potentially require greater bureaucratic involvement, thereby fulfilling both progressive and conservative ideological preferences. Despite concerns regarding its implementation, the possibility of receiving a monthly income simply for existing is quite literally the dream. Who wouldn't want that?

My conception of a basic income isn't inherently universal, but rather an age-specific basic income scheme. Allow me to provide context.

I've had many conversations recently where I referred to the current Gen Zs, myself included, as 'baby adults'. By that, I mean we're at the very infancy of the adult lifespan and understandably have little figured out. A 21-year-old and a 27-year-old should not be held to the same expectations despite being only six years apart. Ergo, I refuse to believe that 21- and 41-year-olds are equally regarded as 'adults' for reasons beyond legal identification.

Yet, the existing systems are designed for a seamless transition, from basic education to higher education and subsequently to the workforce, which is proving increasingly unrealistic and unsustainable. Governments fail to factor in the structural inequalities, diverse preferences, and unintended consequences of these systems, all of which make adhering to the normalised pipeline difficult.

Moreover, we likely underestimate how integral 'productivity' is to one's personhood under the current systems. The education-to-workforce pipeline is built upon the principle of 'persistent productivity', whereby one must permanently be doing something widely perceived as 'useful'. In most cases, this is either education/upskilling, with the aim of monetising the skills or improving one's employment opportunities, or work itself. There's a third case, whereby a person desires either a) the opportunity to upskill or b) work itself, but lacks the resources or pathways to pursue it.

The issue isn't the notion that one must work. Work predates modern civilisation and is likely innate to our 'humanness'. Rather, the issue is the expected adherence to a seamless transition in a world increasingly incompatible with such a system. In other words, there is little to no time to simply 'vibe'. The outcome? A generation burnt out either by work or by the chronic guilt of not being in work.

These progressive economics concepts aren't radical; they simply acknowledge the necessity of updating certain aspects of modern socioeconomic systems to be more compatible with present realities, more desirable, and ultimately more sustainable.

As I stated earlier, my conception of a basic income isn't necessarily for all adults; it should exist specifically for adults aged 21–25, and could operate as an opt-in social programme. The reasoning is multifaceted and, I believe, compelling:

  • Firstly, it's time-bound and whilst unconditional, not permanent. This isn't an endless commitment from the state—it's a targeted intervention during a critical life stage.
  • Secondly, it's highly feasible, depending of course on the population composition and existing tax structures. We're not talking about funding the entire adult population indefinitely; covering a single age cohort is a far more manageable proposition (see the back-of-envelope sketch after this list).
  • Third, as an opt-in scheme, it's unconditional, but not required. Freedom of choice remains central to the concept.
  • Fourth, an age-based basic income could significantly improve mental health outcomes: reduced acute depression in the short term and potentially reduced burnout in the long term. It's hard to say, but I wouldn't be surprised if it contributed to a prolonged decline in suicide rates, given their prevalence amongst young people.
  • Fifth, eligibility is easily monitored, as many governments now operate National Insurance identification systems; the administrative infrastructure largely exists already.
  • Sixth, and most importantly, it could help forecast the economy's potential through dynamic scoring and enable more efficient long-term economic policy. This isn't just about supporting young adults, but about gathering valuable data that could transform how we understand economic behaviour and plan for the future.
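
To give a sense of the arithmetic behind the feasibility point, here is a minimal back-of-envelope sketch. Every figure in it is a hypothetical placeholder (cohort size, take-up rate and payment level would vary by country); the structure of the calculation is the point.

```python
# Back-of-envelope cost of an age-based (21-25), opt-in basic income.
# All figures are hypothetical placeholders, not real estimates.

cohort_size = 4_000_000    # assumed number of 21- to 25-year-olds
take_up_rate = 0.70        # assumed share of the cohort who opt in
monthly_payment = 800      # assumed monthly payment, local currency

annual_cost = cohort_size * take_up_rate * monthly_payment * 12
print(f"Age-based scheme, annual gross cost: {annual_cost:,.0f}")
# -> 26,880,000,000

# Contrast with a universal scheme covering, say, 50 million adults:
universal_cost = 50_000_000 * monthly_payment * 12
print(f"Universal scheme, annual gross cost: {universal_cost:,.0f}")
# -> 480,000,000,000

# The age-based variant here costs under 6% of the universal one,
# which is the feasibility argument in the second bullet above.
```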

Above all, an age-based basic income scheme could provide a more measured education-to-work transition, one better aligned with the current state of things, given that our systems didn't account for advanced factor substitution, sharply higher inflation, and prolonged unemployment.

Whilst a UBI may not be workable for all adults, and certainly not across all countries, it's highly possible that a basic income scheme could be implemented across multiple nations given its bipartisan appeal. I'm not suggesting this is the economic panacea that will solve all our problems, but it could be a significant step in the right direction. Whilst unable to wholly eradicate poverty, an age-based basic income nonetheless provides a useful safety net that could significantly lessen it in the long run.

The possibility of such a scheme truly has limitless outcomes, many of which I can't explore for the sake of brevity. I welcome other researchers—particularly those with a background or interest in econometric models—to build upon and potentially create such a framework. After all, the true test of any idea isn't in its conception, but in the myriad ways it might transform our lived realities.

Yamato

I drafted a post ~6 months ago titled "Good Badvice" on applying discernment to startup advice — whether on product- or company-building. I decided against publishing at the time.

For context: I've consumed my fair share of startup advice from various thought leaders ranging from first-time founders and indie hackers to serial entrepreneurs and VCs. While a lot of startup advice is technically valid, it often lacks nuance.

Close to a year ago, I headed to Lagos in an attempt to get users for my startup at the time. Banga was a relatively simple restaurant solution with straightforward functionality that would've made onboarding users easy (had I landed any). I eventually pivoted to an adjacent market, but I'm glad I didn't spend months building in a vacuum for an unresponsive market. I.e. I 'shipped fast' and 'failed quickly'.

A year later, I'm working on something new, in a different sector entirely and requiring different mechanics. Here, shipping fast with the real possibility of failing quickly is likely not an option. The 'ship fast' principle which birthed the MVP may work for enterprise SaaS tools or vertical APIs, but not for all markets and certainly not all kinds of users. However, much of the available startup advice fails to recognise or promote such nuance. In an industry with such high risk and uncertainty, perhaps more thought leaders should encourage patience and timely strategy rather than speed at all costs.

An analogy I've recently found helpful is the comparison between the Titanic and the Yamato.

The Titanic's demise unfolded through a tragic confluence of design compromises, operational decisions and circumstance. The supposedly revolutionary vessel was, in reality, built with established techniques that neglected structural resilience. Those compromises transformed what should have been a manageable incident into an engineering failure that claimed over 1,500 lives after a single, relatively minor collision.

The Yamato, by contrast, represented an almost supernatural resilience against overwhelming force. At the time of its completion in 1941, it was the heaviest and most powerfully armed battleship ever built, with unprecedented armour protection. It was destroyed only after a massive coordinated attack designed specifically to overcome its exceptional durability: the American force committed nearly 400 aircraft to destroying this single battleship. Yet, even after absorbing 10 torpedo strikes and 6 bomb hits, the colossal battleship maintained operational capability, with functioning weapon systems and propulsion. It took an additional 5 torpedoes — deliberately concentrated on one side to cause asymmetric flooding — before the mighty vessel finally surrendered to physics rather than enemy action, capsizing only after withstanding damage sufficient to sink an entire fleet of conventional ships.

Both vessels sank for very different reasons: one due to myopia and premature launching, the other due to overwhelming external circumstances despite operational fortitude.

We're taking our time and hoping to get things right from Day 1. If things don't work out for any reason, I'd rather it's due to 400 aircraft and 15 torpedoes than a single iceberg.

Autistica

I'll be running the Hyde Park 10K this Saturday to raise money for Autistica, the UK's leading autism research and campaigning charity.

More than one in 100 people are on the autism spectrum and there are around 700,000 autistic adults and children in the UK. At the same time, the autism waiting list in England surpassed 200,000 people for the first time in September 2024.

These days, a formal diagnosis isn't strictly required for someone to be deemed autistic. The increased awareness surrounding neurodivergence, and autism specifically, has enabled people to identify as autistic regardless of a diagnosis. Nonetheless, formal diagnoses are often required for reasonable adjustments in the workplace or in higher education settings. This is especially crucial for adults considered to be "high-functioning" autistic individuals.

As a matter of fact, a diagnosis is simply the first step in what we understand to be a lifelong condition. For many, the diagnosis does not merely label, but instead empowers individuals to recognise and celebrate their neurological differences and special abilities.

I began training for the upcoming 10K close to 6 weeks ago. At the time, I knew I wanted to support a charity focused on autism — ideally in adults — but hadn't conclusively decided which one.

I'm fundraising for Autistica specifically for two reasons. Firstly, I believe addressing autism at the systemic level, through inclusive research combined with advocacy, is the right approach. Secondly, I believe this approach is the best way to achieve their mission of enabling autistic adults to live healthier & happier lives.

On research, their recent projects include: making public transport more accessible for neurodivergent people; using machine learning to identify patterns in autistic people's deaths by suicide; assessing special measures for autistic people in the criminal justice system; developing the neuroinclusion index for employers.

On advocacy, Autistica was recently represented at the Zero Project Conference in Vienna, and at the House of Lords regarding the UK's Autism Act. For context: both events took place within the last week.

I'm proud to support a charity dedicated to truly understanding autism from a clinical standpoint and addressing the needs of neurodivergent adults in society.

April is both Autism Awareness Month and Neurodiversity Acceptance Month, hence this fundraising effort feels particularly important.

I'd appreciate any donations in the run-up to the 10K. At the time of writing, we're 60% of the way to the £2,000 goal, but I have no doubt it'll be reached quite soon.

Thank you in advance.

EDIT: We reached the £2,000 goal on March 17, 2025. I'm incredibly thankful to everyone who donated or supported in some way!

Be Agentic

It's well known that VCs and founders may not always have the same goals, and this remains one of the most common talking points in the 'bootstrap vs VC' debate. I think the debate is largely trite, but it's a valid point to consider.

I wrote earlier about how VCs inadvertently created the faux-genius archetype – of the visionary dropout or the eccentric founder – to help craft compelling narratives and justify risky, high-stakes investments. In the same way, VCs may dictate and/or respond to market trends to generate self-interested outcomes. However, it's worth remembering that the VCs and founders are reliant on each other and should interact fairly (theoretically) in pursuit of a positive-sum outcome.

I find the VC-founder-market relationship quite interesting and reflective of classic game theory dynamics: comprising the principal-agent problem, asymmetric information, signalling and sequential games. The principal-agent problem occurs when one party (the agent) is supposed to act in the best interest of another (the principal), but misaligned incentives lead the agent to prioritise their own goals instead. FOMO plays an especially crucial role in this tripartite relationship: VCs fear missing out on investing, founders fear missing out on building, and users don't want to miss out on adopting early enough.
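
To make the principal-agent point concrete, here's a toy payoff model. The numbers are entirely made up for illustration; they merely show how a VC holding a diversified portfolio can rationally prefer a strategy that a founder, holding a single undiversified bet, would rationally avoid.

```python
# Toy principal-agent illustration with hypothetical payoffs.
# Each strategy: probability of a big exit, plus (success, failure)
# payoffs for the VC and for the founder, in arbitrary units.

strategies = {
    "sustainable": {"p": 0.50, "vc": (3.0, 0.5), "founder": (5.0, 2.0)},
    "blitzscale":  {"p": 0.10, "vc": (40.0, 0.0), "founder": (20.0, 0.0)},
}

def expected_value(payoffs, p):
    success, failure = payoffs
    return p * success + (1 - p) * failure

for name, s in strategies.items():
    ev_vc = expected_value(s["vc"], s["p"])
    ev_founder = expected_value(s["founder"], s["p"])
    print(f"{name:>11}: EV(VC) = {ev_vc:.2f}, EV(founder) = {ev_founder:.2f}")

# With these numbers, the VC prefers blitzscaling (EV 4.00 vs 1.75)
# whilst the founder prefers the sustainable path (EV 3.50 vs 2.00):
# one relationship, misaligned incentives.
```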

The emergence of AI agents, whilst impressive, doesn't yet appear proportionate to the ongoing market hype. Founders and VCs continually search for the next potentially transformative technology, driven by the respective FOMO influences mentioned above. We've witnessed this pattern with LLMs/chatbots, crypto and the metaverse. I don't doubt that autonomous agents could transform how certain processes are executed, but it's worth being relatively cautious about lofty claims surrounding market sizing or valuations.

AI no doubt remains a revolutionary technology, but deciding to build a product around an ongoing trend – solely for the sake of it – presents significant risks. If choosing to deviate from sector trends, one should do so with conviction.

Are We There Yet?

I've worked on bringing a few ideas to life over the past 2 years. None materialised, but I learned a lot. I've ranked these based on how far along each got, how scalable it was, and the most notable lessons learnt:

Brade: a plug-and-play accounting tool for salons and other beauty services. We built Brade to make reconciling multi-channel payments, forecasting finances and filing taxes 10x cheaper and 100x easier. People found the product intriguing, but either a) preferred one software suite for daily ops or b) trusted their human accountants far more; both perfectly reasonable.

Pubbler: a self-learning assistant specifically for Substack writers built on Anthropic's Claude Sonnet model. Despite positive waitlist signalling, I decided not to launch for other reasons.

Banga: a WhatsApp booking agent for restaurants. Banga cut total booking time by 80% during testing – from initial message to email confirmations. The value-add was clear, but the business model wasn't.

Sasa: a dating app for university students with two modes: "Good Time" and "Long Time". We pitched this at a Valentine's Day event and got 0 signups.

REN: a wearable ring that lets individuals struggling with addiction discreetly seek help via a touch sensor & BLE. REN was designed around Prochaska and DiClemente's transtheoretical model of behavioural change.

Shora: a digital health platform connecting EHRs (patients) and EMRs (hospitals) for better travel healthcare. A lot of things were rather unclear here, but that's to be expected of my first-ever 'venture'.

Soundtrack: a 'song a day' social media platform based on the tried-and-tested streak model popularised by BeReal. Soundtrack didn't launch, but I assisted & advised on design, product and GTM.

Orchard: an anonymous online community for Gen-Z users to get mental health advice without stigma. This wasn't a great idea, and the business model wasn't clear either.

Staycey: a swipe-to-book hotel search engine built on Booking.com and Expedia's APIs. People really really (really) did not want this for some reason.

Almanac: an AMA, Reddit-style forum for student-alumni networking & mentorship. I built this specifically for my alma mater, but it unfortunately wasn't picked up.

It's uncertain why some of these didn't succeed as expected. In some cases, it was simply a matter of inertia. In other instances, it's quite possible we 'overniched' and were just 1–2 pivots away from getting it right.

Last Laugh

I've been watching sitcoms for as long as I can remember. I watched Disney Channel classics such as Suite Life and Wizards very early on, before discovering the likes of The Big Bang Theory and Brooklyn Nine-Nine much later on.

As I've known them, sitcoms involve ~20-minute episodes, an ensemble cast (each member with unique personality traits) and relatively lighthearted subject matter with minimal continuity. Interestingly, the 'sit' in sitcom stands for 'situation', with plotlines often revolving around the recurring settings.

I'm almost always watching one sitcom or another - oftentimes rewatching one as I wait for another to be released. The interesting thing about watching (or rewatching) sitcoms from multiple generations isn't necessarily the character formation or storylines - although newer sitcoms are notably more inclusive - but the evolution of the sitcom itself.

Over the last few months, I've been particularly curious about digital dualism: the belief that online and offline are separate and distinct realms. I'd consider myself a digital monist; in other words, I believe the two realms are becoming one, or may be so already. Now, whilst sitcoms don't necessarily qualify as 'online', their evolution may actualise Oscar Wilde's assertion that "life imitates art far more than art imitates life".

In thinking about the overlap between sitcoms and 'the digital', I began considering the evolution of sitcoms as an art form, specifically from 1990 onwards. For example, Carrie's column in Sex and the City in the late 90s was interesting without significantly affecting the characters and their respective subplots. This stands in stark contrast to iCarly which, although targeted at a younger demographic, made its live webcast central to the show.

The evolution over the last three decades can be explained in part by Baudrillard's "simulacra", which describe a hyperreality where the distinction between real and artificial has collapsed. The simulacra/simulation framework doesn't completely capture the sitcom's progression, but it provides a good basis to work from. The four stages, as originally posited by Baudrillard, may align better with ongoing discourse on media & journalism and politics.

Nonetheless, early sitcoms of the late 90s to early 00s like Friends and That '70s Show align well with Baudrillard's first stage. The shows are evidently constructed, with artificially engineered interior design and live-studio laugh tracks that amplified their fictionality. The artifice was overt, but viewers willingly accepted it as a reflection of reality.

Shows like HIMYM and Two and a Half Men followed shortly after, in the mid-00s to mid-10s. These shows were slightly more layered, attempting to blur the boundary between fiction and reality through emotional depth and storytelling complexity. Yet, they maintained their simulative qualities through the laugh track.

However, this era also marked a transitional period which aligns with Baudrillard's third stage. The Office and Parks and Recreation emerged and challenged the notion of sitcoms revolving around friends and/or family through the 'mockumentary' style. Whilst retaining the aforementioned sitcom-esque elements, these shows offered a lens into a reality that, though previously unattached to the genre, is all too familiar – the workplace. The absence of the laugh track, combined with the shift in settings and the documentary style, certainly heightened the sense of reality. Hence, these shows and their characters were deemed more 'relatable', and the distinction between real and simulated faded even further.

The mid-2010s marked a revolutionary period for sitcoms, and what I would class as a 'golden age' for the genre, were one ever to exist. However, as with most other fields following paradigm shifts, the genre changed notably after the Schurean Revolution.

These days, contemporary shows like Ted Lasso and Only Murders in the Building are still classed as sitcoms despite lacking many of the core characteristics that previously defined the genre. As I see it, sitcoms aren't defined merely by recurring settings or comedic writing alone, but by a combination of multiple elements. Although not the case with all shows, we're witnessing a blurring of genres, where shows like Hacks and Shrinking adopt the length and emotional depth of equally hyperrealistic shows like Severance. I agree that certain comedic tropes which were popular in the 1990s–2000s are misaligned with our conceptions of reality and our consumption patterns. However, I worry the age of the sitcom, which once served as a much-needed lighthearted form of escapism, is fading before our eyes.

Visionary Capital

Startup culture heavily relies on certain narratives and messaging. The stories of near-death fundraises, office-floor sleeping bags, and last-ditch pivots are retold with increasing drama on every podcast and at every fireside chat. The implication that most founding stories need to be interesting – containing Shakespearean elements of struggle, drama or betrayal – is especially harmful for young entrepreneurs.

The 'visionary' archetype has subsequently emerged as a natural idol created by the VC/startup ecosystem. This mythologised figure aligns almost perfectly with the overarching narrative: contrarian, eccentric, egotistical and slightly controversial. The archetype has since become aspirational: for VCs to spot, and for first-time founders to become.

It's certainly important to dream big. Many of our greatest innovations exist not because certain people saw what others couldn't, but because they pursued bold and daring ideas with high conviction. Anyone would feel on top of the world if a big bet paid off, made them millions, and changed how entire industries operate, especially if others had doubted them. However, pioneers were previously deemed visionaries only after achieving success. As a result, there were fewer visionaries, and thankfully fewer thought leaders on podcasts without a leg to stand on.

Today, we're experiencing technological innovations of an unprecedented nature. Most notably, the cost of building products – financially and otherwise – has decreased massively, and we're seeing a rise in competition across multiple sectors. Just as thousands of products are launched each month, hundreds of thousands of founders are emerging. It's easier than ever to start a company, but building products is no longer enough to stand out. So now we have more self-proclaimed visionaries than ever, either over-dramatising aspects of startup culture and/or simply regurgitating dictums by industry veterans – or attempting to debunk them for the sake of engagement, i.e. 'ragebaiting'. Perhaps the real artificial intelligence we should be worried about already exists on tech Twitter.

There's nothing inherently wrong with thought leadership, but when it primarily serves to validate the pursuit of ego over impact, we're creating a dangerous template for the next generation. This isn't new either – for the longest while, this idealised archetype has involved grandiose thinking, an inflated sense of special destiny, and delusions of godlike power excused as necessary quirks of visionary leadership. Oftentimes, these are revealed to be nothing more than a facade, masking fragile egos that have somehow amassed outsized influence.

However, unlike other industries where unchecked hubris might only affect a company, technology's unprecedented reach means ego-driven founders can affect billions of lives across healthcare, education, journalism, and beyond. The same personality traits that might be harmless (or even helpful) in building a photo-sharing app become concerning when applied to reshaping democracy or developing artificial intelligence.

While previous generations of industrialists could influence how we work or travel, today's tech leaders can reshape how we think, connect, and understand reality itself. Perhaps it's time to acknowledge that the stakes are too high for ego to lead the way.

Discontent

In their original forms, the likes of MySpace, Facebook, and Twitter provided users with platforms for connection-building through growing their network, adding friends, and maintaining relationships. Hence, social networks. These days, TikTok, Instagram, and others are primarily content vehicles, optimised not for connection but for consumption and, increasingly, monetisation. Hence, social media.

There's a powerful feedback loop between language and behaviour, and the terministic shift from social network to social media certainly transformed user behaviour. Once platforms became framed as 'media', the focus fundamentally shifted from "who do I know?" to "what can I share/engage with?", and success became measured in views/likes rather than meaningful connections. The language didn't just describe the transformation of these platforms; it helped enable and accelerate it, in the same way calling something a "news feed" rather than "friend updates" subtly shapes how we approach and consume that information.

More than ever, we're in a state of perpetual performance through meticulously engineered attempts at authenticity. The 'photo dumps' — ostensibly casual collections of unfiltered moments — are ironically often more curated than traditional posts. In my view, the worst of these are 'morning routine' videos on TikTok. I get secondhand exhaustion from realising people are waking up to position their phones only to crawl back into bed to record the start of the day. Every "authentic" moment filled with oat milk lattes and suspiciously perfect dinner bowls requires extensive preparation, multiple takes, careful curation and constant editing.

Within this constant cycle of content creation and consumption lies a pressure to participate in the aspirational theatre of digital performance. Indeed, this phenomenon extends into every conceivable niche. I've experienced it myself — as a DJ years ago and more recently in powerlifting — where there's an immediate expectation to create content, build a personal brand and monetise these interests.

I believe the constant pressure to perform alongside the abundance of consumable content has enabled the commodification of human experience. I think some types of content are worth monetising: cooking tutorials, fitness guidance, financial advice. But, we've extended beyond these into innumerable clusters where various aspects of the ordinary human lifestyle demand documentation and distribution. First dates. Grief. Self-care.

It's rather homogenous, and subsequently quite exhausting.

The issue at hand goes beyond the existence of social media platforms to their impact on our lives offline. There are now more 'Instagrammable' spaces than ever: focused on aesthetics and faux-ambience, devoid of natural human insight.

More notably, I find it interesting how intimate moments like gender reveal parties, weddings, and baby announcements have turned from sacred and personal into opportunities for documentation and distribution. Are we not exhausted by the constant curation?

The issue isn't documentation itself — we're privileged to be able to capture these moments — but my issue lies with documentation for posting & engagement, rather than memory preservation. I attended a talk last month where the speaker asked if we would do things knowing no one could like, comment or reshare… the silence was deafening.

Instagram 10 years ago was more normal. I personally preferred it when not everyone could be, wanted to be, or felt they had to be a celebrity. People have idolised celebrities for decades, but now it's easier than ever to become one. Perhaps this represents a scary paradigm where digital maincharacterism has morphed into a hyper-individualism which exists offline?

Twenty

In Yoruba culture, proverbs (òwe) reflect a worldview of pragmatic realism through encoded insights about human nature, virtues and cosmic order. These sayings serve as vessels of generational wisdom which transform lived experience into actionable guidance. This philosophy aligns with a broader African saying "An elder sees sitting down what a child cannot see standing up."

Recently, I've been contending with "Ogún ọmọdé kìí ṣeré ogún ọdún"; in other words, twenty children cannot play together for twenty years. I suppose it can be considered a simple (but harsh) truth.

I heard that saying for the first time just over a decade ago, months following my elementary school graduation. At the time, I understood it as nothing more than a poignant observation about how time and circumstances naturally separate even the closest bonds. My friends and I had branched out to different schools across the country, many of those bonds would not remain the same, and we would inevitably each form new ones.

10 years later, the word 'friend' means something completely different to me - and I'm sure many of you can relate. Likewise, the proverb no longer resonates with me.

If you've been following these essays, you might've noticed a thematic interest in how language shapes thought. In this case, I believe the specific juxtaposition of 'children' and 'play' does more than mark time's passage; it also reinforces the disparity in age/wisdom - perhaps to trivialise the nature of child-like companionship and social bonding.

While I appreciate Yoruba philosophy's staunch realism, I find myself questioning the absence of a counter-narrative to this hyperbolic adage. Yes, twenty children cannot play together for twenty years - I understood this even as a child. But perhaps the more interesting question is whether a handful of connections, carefully chosen and deliberately maintained, can defy the proverb's implied inevitability.

Many of my friends are also from Lagos, Nigeria, but are scattered across the world. Ideally this would not be the case, but it's in one's best interest to pursue education and career opportunities elsewhere if able to do so. However, this dispersion only amplifies the difficulty of nurturing friendships. Hence, this paradigm is extremely intriguing on a personal level - more so during the festive season, when many of us return home.

Despite my friends and I being dispersed across different countries over the years, I am glad to still have friendships spanning 5, 10, and even 20 years. I am therefore able to provide some firsthand perspective.

All adult relationships are fundamentally rooted in choice: we actively choose to begin, maintain, or end relationships. These endings typically stem from either situational or personal changes, which aligns with British philosopher Derek Parfit's 'Relation R'. Parfit argues that personal identity isn't what matters in survival; what truly matters is psychological continuity and connectedness. When applied to friendship, particularly in the context of the diaspora, this framework illustrates why maintaining long-term connections is so complex. It's not merely about staying in touch, but about maintaining connection with someone who, like you, is constantly evolving under vastly different influences.

Yeah, Write

I started my first blog at 7. Tumblr was pretty big at the time, and I randomly found myself writing about the football transfer market a few times a week. For some reason, people liked what I had to say. Till date, I can't understand why, and I wonder if they would've been interested had they known my age at the time. I didn't particularly enjoy writing, nor did I have any intention of becoming a sports analyst. Truthfully, I don't think I knew what I was even doing. I simply enjoyed something, found myself writing, and enjoyed what I was writing about.

Each day, we tend to write more words than we speak. Consider emails, texts, tweets, comments, to-do lists, meeting and/or class notes, etc. In most cases, we don't enjoy writing. Plainly put, writing (or typing, rather) can feel tiresome and possibly annoying — emails probably top the list. Currently, many others and I use ChatGPT or Claude to write responses to emails. In the near future, tools like Friday Mail and Superhuman will respond to emails for us automatically. This isn't a bad thing by any means. Emails can be tiresome, and I'm a huge advocate of finding tools to make one's day-to-day far more efficient. Besides, the advent of smarter email tools probably has relatively insignificant consequences in the long term.

Assuming these succeed — and they likely will: emails are a burden — the obvious next step is more tools to make writing even 'easier'. Consider a text-messaging tool trained on your personality, writing style and relational context. Or a tool that generates a shopping list based on a picture of your fridge. Not to mention AI agents that could generate industry-standard reports based on auto-replied emails and nifty web scraping.

On the surface, these few examples would likely be helpful and save time. But, I worry we'll simply find ourselves writing less and less for ourselves and each other.

The reason we may write less is not because of these new tools. Instead, these tools have emerged from a perception of daily writing as burdensome or laborious. I think we'll write less in general, and there's enough to indicate we'll use brain-computer interfaces in many daily tasks over the next two decades. Of course, the rate at which this occurs depends on the technology itself. If I had to guess based on the last few years, I expect it would begin at a linear pace and then accelerate exponentially as the infrastructure to build such tools becomes more accessible.

Should this occur, I don't believe we will completely forgo writing over time, nor will we risk significant evolutionary regression or cognitive atrophy. Instead, we will likely 'write' through more advanced means. At its most basic level, writing is a deeply cognitive process, and we'll still require degrees of cognition in the future — whether for prompt engineering with chatbots or text conversion with neural implants.

My concern essentially involves the dichotomy between generated text and written text; we use both on a daily basis. The former comprises I/O processes: email responses, meeting notes, to-do lists, etc. These are systematic processes which prioritise precision, accuracy and efficiency. It's easy to see why they form the first wave of productivity tools.

African Intelligence

Afrobeats has grown exponentially in the decades since its origin. Multiple artists sell out arenas globally, the genre consistently garners billions of streams each year, and the Recording Academy recently created a new category specifically for African music.

However, Afrobeats has changed significantly, leaving most long-time listeners with mixed feelings. Why? Afrobeats is quite personal to our culture, and the trade-off between authenticity and commercialisation feels antinomical. The evolution of any genre is inevitable, but it becomes trickier once it demands trading authenticity for mainstream appeal. I'm not surprised by this, and I'm sure others aren't either. Nigerian culture has generally become more popular over the last two decades, with our film, cuisine, and art gaining traction in global spaces.

Therefore, it makes sense that Afrobeats follows suit within the ongoing globalisation of Nigerian culture. Although this increased exposure should be a good thing, I remain skeptical. Our artists are more able than ever to achieve market-fit due to this increased recognition, playing a critical role in cementing our positioning as the 'cool kids' of Africa. However, it is worth noting that this rapid growth was not serendipitous or purely random.

For the longest time, the core lever for Afrobeats' commercialisation has been features and collaborations with other artists. On the surface, it's a seemingly innocent 'quid pro quo' which promises growth for both parties. In reality, I feel it's been an exploitative dynamic akin to the resource extraction observed in former colonial states. We could discuss the symbolism of Wizkid's 'collaboration' with Drake on One Dance, where his voice can barely be heard. Or, perhaps, Beyoncé's The Gift, which featured primarily Afrobeats artists and yet conveniently included no African countries on the tour.

One could argue that the tables seem to have turned, as we now see many Afrobeats artists featuring foreign artists on their own projects. Case in point: global acts like Justin Bieber, Nicki Minaj, and Chris Brown have been featured on multiple Afrobeats projects over the last 5 years. I believe the initial quid pro quo stands, but we falsely believe we're in the driver's seat. Afrobeats is trendy, and this time it conveniently benefits the foreign (primarily American) artists to be featured within the genre, instead of vice versa.

I am not as concerned about all aspects of Nigerian culture losing their authenticity. In many cases, some degree of cultural synthesis can be beneficial. Nollywood, for example, could benefit from raising its standard to match the production quality of its counterparts. Likewise, I've long dreamt of Nigerian-fusion dishes which are finally becoming possible through increased exposure. However, I am primarily skeptical regarding the influence of cultural convergence with Afrobeats due to the propensity for its assetisation. Assetisation is a key aspect of technoscientific capitalism wherein non-financial mediums are transformed into financial assets which allow for investments and subsequent returns. Therefore, the newly converted asset becomes a mechanism which can be controlled, traded, and capitalised through the revenue stream.

Cide Effects

TW: suicide

According to Samaritans, 1 in 5 people have had thoughts of ending their lives. In other words, a fleeting thought about ending one's life is something most people have experienced or may experience at some point. The reasons why people end their lives are numerous, complex, and ultimately not my focus. That being said, there is help available, and I have attached this list of helplines provided by the charity Mind.

Earlier this year, I began examining whether the word 'suicide' itself influenced its stigma. The term — from the Latin suicidium (sui, "of oneself", and caedere, "to kill") — replaced the more accusatory 'self-murder' and had become established in the English language by the mid-18th century, resonating with earlier terms like suicist and suicism which were rooted in notions of selfishness — a prejudice that persists today. The term now carries false associations of despair, futility and failure with ending one's life; I believe this stigma is perpetuated by the 'S word' itself.

Catherine Ruff provides some useful perspectives on suicide during the Stoic era. At first glance, Stoicism appears at odds with suicide through its teachings of resilience, dignified endurance of hardship, and apatheia. Yet Zeno, the founder of Stoicism, strangled himself to death. Historical literature has likewise depicted self-inflicted deaths without stigma, and in some instances as acts of heroism: from the biblical Saul to Shakespeare's Ophelia. Similarly, the deaths of Cato the Younger, Socrates, Brutus, Cassius, and Mark Antony were viewed as acts of martyrdom.

The way language shapes our understanding becomes clear when we consider how the '-cide' suffix in suicide aligns with words like homicide, infanticide, and genocide, all of which denote murder rather than mere death. This perpetuation occurs through language transmission — the word's centuries-long evolution — and the sociological beliefs embedded within its generational use. Linguistic determinism suggests that the language used by a group influences their thoughts, perceptions, and worldview, ultimately shaping how individuals understand and experience reality. Many people falsely perceive suicide as 'giving up' or 'ending one's future', often implying that those ending their lives — or even contemplating it — are unable to find enjoyment and/or purpose.

Research has consistently shown that support and prevention grounded in respect, free of condescension or infantilising approaches, are far more effective than well-intentioned but judgmental, hyper-optimistic rhetoric. I suppose future prevention means replacing the outdated term; perhaps that may eradicate its stigma once and for all.