Connecting Dots

> Search Results

  • Microsoft's Enshittification of Everything

    Since the release of Windows 11 in late 2021, Microsoft has made a series of increasingly unpopular changes to the Windows ecosystem, calling into question the future of the operating system. In February 2023, Microsoft unveiled Copilot, a generative AI chatbot that replaced Cortana. While Microsoft previously claimed more than 150 million Windows users used Cortana, that estimate was probably more than a little optimistic. Cortana’s deep integration into Windows 10 let users perform tasks with voice commands, but it lacked a clear path to monetization; that is, Microsoft couldn’t figure out how to turn Cortana into a paid service. With Copilot’s introduction, development of Cortana ended and Microsoft redirected those resources into Copilot. While Apple leans on OpenAI’s ChatGPT models for its Apple Intelligence product, Microsoft’s Copilot is built on OpenAI’s GPT models (its sibling, GitHub Copilot, developed by Microsoft subsidiary GitHub, was originally powered by OpenAI’s Codex model). Having joined the big tech AI chorus, Microsoft is placing Copilot and generative AI front and center in its consumer-facing product portfolio. Surprisingly, Microsoft is also pushing its AI agenda in business- and enterprise-focused products like Microsoft Azure, Entra, and Microsoft 365 for Enterprise. Considering that enterprise customers are traditionally more conservative and adhere to slower adoption and upgrade cycles, Microsoft’s aggressive AI push across all fronts seems a risky gambit. As with other consumer-facing AI services, Microsoft has positioned Copilot as a “freemium” product: a free tier with limited features and a paid tier with more advanced ones. To bolster this agenda, Microsoft, following in Apple’s footsteps, revealed plans to introduce ARM-powered Windows PCs with deep Copilot integration under the Copilot+ moniker.
Perhaps the most controversial feature of Microsoft’s Copilot+ platform is Recall, a feature Microsoft claimed would let you search everything on your PC using natural language, removing traditional barriers to finding changes you’ve made to documents, edits to photos, and so on. It didn’t take long, however, for security researchers to skewer Recall as a cybersecurity nightmare: built on an unencrypted database of screenshots, Recall wasn’t secure at all. Anyone with local access to the computer could easily exfiltrate this database, containing untold troves of sensitive user data. Privacy implications for individual users aside, more questions were raised about compliance and data security in corporate and government environments. There were too many unanswered questions about Recall’s technical implementation: how easy it would be to disable, whether it would stay disabled once turned off, and how system administrators would manage it at scale. Microsoft said precious little about data security and user privacy until public backlash forced it to push Recall’s release back to an unknown future date. Microsoft’s bungling of Recall’s technical implementation and its initial stonewalling in the face of public criticism speak to more insidious, deeply ingrained problems at the company. While layoffs are common throughout the tech industry, Microsoft has often been at the fore when it comes to dismissing entire teams and divisions. After finalizing its Activision Blizzard deal in October 2023, Microsoft cut roughly 1,900 employees from its gaming division in January 2024, or about 9% of the unit. During the first nine months of 2023, Microsoft reduced its workforce by 16,000, outstripping the 10,000 layoffs it forecast at the beginning of the year.
In reducing its gaming unit headcount, Microsoft shuttered multiple game studios, including Arkane Austin, Tango Gameworks, and Alpha Dog Games. Microsoft’s treatment of its gaming division is only a microcosm of the wider video game industry’s treatment of its own talent: in February 2024, Sony cut 900 employees from its PlayStation division, and Take-Two Interactive (parent company of Rockstar Games) announced plans to cut its workforce by 5% and end development on several games. None of this apparent dysfunction is really shocking, considering corporate acquisitions inevitably result in mass layoffs: roughly 30% of employees are deemed redundant when companies in the same industry merge. We can’t hold Microsoft to a separate standard for post-merger practices, considering the fetish for layoffs is one shared throughout the Fortune 500. On the other hand, in spite of Microsoft’s massive war chest and its appetite for acquiring companies and intellectual property, its cybersecurity practices are in an apparent state of free-fall. AJ Grotto, former White House cyber policy director, claims Microsoft is a “national security threat” due to its monopoly position within the industry, especially within the realm of government IT. In June 2023, Chinese state-backed actors attacked Microsoft Exchange Online, an intrusion facilitated by Microsoft’s lackadaisical security policies, leading the federal Cyber Safety Review Board to demand “fundamental, security-focused reforms” at Microsoft. On April 2, 2024, CISA issued an emergency directive calling for immediate remediation of a major security breach in which Russian state actors exfiltrated data from Microsoft email systems.
CISA writes in its directive: “The Russian state-sponsored cyber actor known as Midnight Blizzard has exfiltrated email correspondence between Federal Civilian Executive Branch (FCEB) agencies and Microsoft through a successful compromise of Microsoft corporate email accounts. The threat actor is using information initially exfiltrated from the corporate email systems, including authentication details shared between Microsoft customers and Microsoft by email, to gain, or attempt to gain, additional access to Microsoft customer systems. According to Microsoft, Midnight Blizzard has increased the volume of some aspects of the intrusion campaign, such as password sprays, by as much as 10-fold in February, compared to an already large volume seen in January 2024.” Microsoft’s position on internal cybersecurity practices seemingly hasn’t changed, in spite of CEO Satya Nadella’s commentary on Microsoft’s broken security culture. Nadella said, “If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security. In some cases, this will mean prioritizing security above other things we do, such as releasing new features or providing ongoing support for legacy systems.” Microsoft’s various commitments to improved security across its customer-facing products read more like nebulous promises, while its position on internal security falls somewhere between “scattershot” and “completely undefined”. Microsoft’s massive size, knowledge siloing, and responsibility for maintaining huge parcels of legacy code likely all contribute to the brokenness of its internal and external cybersecurity practices. An organization the size of Microsoft requires massive investments in cybersecurity, security-awareness training for teams across all units, and external audits to demonstrate that security practices are actually being followed.
So far, though, Microsoft’s only incentive to improve its security practices is the threat of losing market share to competitors, and in government IT, which accounts for a significant portion of Microsoft’s revenue, there is no competition. Meanwhile, the U.S. government has proven itself toothless in handing down reprimands that actually hurt serial security and privacy offenders; such fines are considered the cost of doing business for companies like Microsoft. Let’s consider, then, the likelihood of two scenarios: (1) a serious competitor to Microsoft appears within the next few years and forces Microsoft to change its security and privacy practices, lower prices, and listen to customer feedback; (2) U.S. regulatory agencies hand down multibillion-dollar fines that substantially damage the finances of companies like Microsoft when they fail to comply with industry regulations. Given the direction of our political institutions, including the Supreme Court, the odds of the U.S. government holding abusive monopolies to account seem poor. Likewise, the odds of a serious competitor to Microsoft emerging anytime soon are remote at best. It seems obvious that we, as tech consumers, are arriving at a crossroads where we have to reconsider our relationships with companies like Microsoft. As more productivity software becomes web-based and the average person’s need for processing power and storage declines, Microsoft’s Windows hegemony appears precarious. Microsoft is cognizant of the changing dynamics of end-user computing, of course, which is why it is positioning itself as a services company rather than the boxed-software outfit it used to be. The longer-term strategy may well be to convert Windows itself into a monthly or yearly subscription, if Microsoft’s efforts to monetize data mined from current Windows installations don’t pay the dividends it wants.
Microsoft’s whipsawing of Windows users on the issue of local account creation in Windows 11 ties into the general enshittification of Microsoft products. Despite some shifts in its stance on local accounts versus Microsoft accounts, Microsoft’s long-term strategy with Windows 11 has been to discourage users from creating local accounts during setup. While Microsoft accounts were previously optional, they are all but mandatory now, a mandate that puts users who don’t have or don’t want a Microsoft account in a compromising position. More troublingly, Microsoft’s decision to make local accounts optional only in more expensive versions of Windows (or versions unavailable to the general public) raises further questions about the company’s abuse of its monopoly position in consumer computing. Previously available workarounds to avoid the account dictate are slowly being stamped out, leaving users with fewer ways to set up Windows with a local account. In a disturbing twist, this online account mandate means that if a computer doesn’t ship with compatible networking drivers, it can’t connect to the Internet during setup and no local account option appears, leaving the user in a sort of purgatory until compatible drivers can be integrated into a custom Windows image, a task far outside the average user’s technical capacity. As things stand, a confluence of poor practices, anti-consumer policies, and monopoly abuse has put Microsoft in a position where governments and enterprises increasingly question its competency, and end users question the need for Windows at all. Microsoft may not have meaningful competition in government and enterprise IT, but its behavior will hand its competitors all the rope they need to hang it.
While Microsoft may envision a future in which it can double-dip by monetizing user data and converting its entire portfolio to monthly subscriptions (see: Adobe), it fails to properly heed the rising threats of Apple’s macOS and Google’s ChromeOS. In January 2013, Microsoft Windows held 91% of the desktop operating system market; by November 2023, that share had fallen to 72%. Over the same period, Linux’s market share grew from less than 1% to over 4%, and ChromeOS (which is based on Linux) became a juggernaut in educational settings. Microsoft’s insistence on ignoring the user experience, milking its government and enterprise clients for all they’re worth, and antagonizing the federal government by failing to secure its own infrastructure is leading it, and us, down the garden path to oblivion. As enshittification within the tech space accelerates, we have to reconsider what our data security and privacy are worth. A false sense of convenience has led the average user to change the way they value ownership, security, and privacy while the stakes in cybersecurity have never been higher. Hyper-normalization of data theft, foreign espionage, and state-backed cyberattacks has led people to expect and accept piss-poor behavior from giant tech companies at a time when these companies should be held to higher standards rather than excused from any real liability. If you do what you’ve always done, you get what you’ve always gotten. It’s time to abandon bad platforms and reject bad policies, even if it is temporarily inconvenient. Watchdog groups are toothless, and the government certainly won’t do it for you.

  • Seen and Unseen: The AI You Know, The AI You Don't

    While corporate America carries out its work of finding new ways to put artificial intelligence and large language models in places they shouldn’t be, the utility of discriminative AI goes largely unheralded in the public consciousness. Companies like Microsoft and Google are spending billions marketing generative AI, which deals with creating new data that resembles existing data (for instance, a painting derivative of an existing work). Discriminative AI, meanwhile, deals with the classification and recognition of existing data. Discriminative models rely on supervised learning, in which the datasets fed to the model are labeled, each data point corresponding to a label or category. Discriminative models are used in applications such as facial recognition, spam email filtering, image recognition, and sentiment analysis. Generative AI, meanwhile, is employed in the creation of realistic images and videos, new musical compositions, and text generation (for example, writing an essay or email on behalf of a human user). Chatbots are based on a specific type of generative AI known as the large language model (LLM). Large language models are trained on massive text datasets and are particularly adept at translation, text generation, text summarization, and conversational question-answering. Google Gemini, for instance, is a collection of LLMs that functions as a conversational chatbot: when you give Gemini a prompt, it replies in a ‘conversational’ way to approximate an interaction with another person. Thanks to generative AI’s novelty and its wide array of applications in the average user’s life, Big Tech has seized on inserting it into everything from web search to online chatbots. While we struggle to fully appreciate the longer-term consequences of hastily deploying generative AI models in every corner of our lives, the momentum of AI only grows.
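The generative/discriminative split described above can be made concrete with a toy example. Below is a minimal sketch of a discriminative model in the sense used here: a naive Bayes spam filter trained on labeled examples via supervised learning, which classifies existing data rather than generating new data. The training sentences are invented for illustration.

```python
import math
from collections import Counter

# Labeled training data: supervised learning pairs each data point with a label.
TRAINING = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday with the team", "ham"),
]

def train(examples):
    """Count word frequencies per label (the 'learning' step)."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label whose words best explain the input
    (naive Bayes with add-one smoothing and uniform priors)."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = 0.0
        for word in text.split():
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

counts, totals = train(TRAINING)
print(classify("free prize money", counts, totals))    # spam
print(classify("team meeting monday", counts, totals))  # ham
```

The same labeled-dataset pattern scales up to the facial recognition and sentiment analysis systems mentioned above; only the features and model complexity change.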
There doesn’t seem to be a consensus on whether AGI (artificial general intelligence) will ever materialize, or, if it does, what risks it poses to humanity; at present, we’re still in generative AI’s infancy and our regulatory approach is very much in flux. AGI is still hypothetical: in theory, an AGI model could reason, synthesize information, solve problems, and adapt to changing external conditions the way a human can. The risks inherent in such a technically capable form of AI are hard to overstate. In short, when it comes to AGI, we don’t know what we don’t know. An AGI that outsmarts its creators could become impossible to control, leading to a potentially devastating sequence of unintended consequences for humanity. An AGI that decides its values don’t align with human values could, for example, shut down power grids, launch massive cyberattacks against allied or enemy nations, or become a powerful tool in disinformation and social manipulation campaigns. These concerns are all theoretical, but given the rate at which AI as a computer science discipline is evolving, we shouldn’t discount the possibility of a future AGI. The moral, ethical, and regulatory concerns surrounding AI mount by the day, and state governments are only now getting to grips with what regulating generative AI in particular will entail. The Connecticut State Senate recently introduced legislation to control bias in AI decision-making and to protect people from manufactured videos and deepfakes. The state is one of the first in the U.S. to introduce legislation targeting AI, and the usual cadre of mostly Republican opponents has seized the opportunity to claim the bill will “stifle innovation” and “harm small business”. How, exactly, regulating generative AI would meaningfully harm the average small business remains to be seen.
But even so, opponents to the bill may have another point: complex legislation on a complex and rapidly evolving subject will bring unintended consequences. We find ourselves in an increasingly dire situation, then, where our legislators, largely geriatric and unplugged from the modern technological zeitgeist, are writing ineffective legislation on matters they don’t even peripherally grasp. In fact, most computer science graduates working in their fields didn’t specialize in artificial intelligence, so we’re relying on a vanishingly small percentage of the population to navigate these fiendishly complex issues. AI and ML are rapidly evolving fields and, owing to their popularity, more computer science students are now focusing on AI, but there is a significant lag between graduating with a specialization and becoming an expert in that specialty. Generative AI in the form of Alexa telling us a joke is the thing we see; the reality of trying to manage a world in which AI has embedded itself is the thing we don’t. Since the Internet’s rise to ubiquity during the 1990s, legislation and regulation have lagged behind the explosive growth of technological advancement. We’ve been fighting an uphill battle to elect representatives who understand technology from a voter base that also largely doesn’t understand it very well. As the rate of change accelerates within AI and machine learning, we need experts in these fields who can respond effectively to those changes. In short, we are running up against the limits of effective governance when those doing the governing aren’t digitally literate. Of equal concern is that many of our aging representatives are surrounded by legions of aides and advisors who may well whisper in their ears that AI needs no regulation while those same advisors buy stakes in companies developing their own AI models.
The recalcitrance that members of the Republican party have displayed on the subject of effective AI regulation is par for the course, but in this particular case their oppositional defiance is uniquely dangerous to the public. AI is a Pandora’s box: we don’t know how an AI model will hallucinate, or how far AI-generated disinformation will spread before a human hits the kill switch. By integrating generative AI into the social fabric, we’re essentially entrusting humanity’s combined effort and treasure to an entity that has to be constantly managed, reviewed, and course-corrected to behave in a sane and predictable way. This is a more monumental task than most people with only a peripheral understanding of AI seem to realize. The meteoric rise of machine learning within the field of AI also seems to be ushering in a new kind of societal disparity: technological. Those who control the algorithms that make up the body of AI will have a certain degree of power over virtually every aspect of human life; maybe this was the endgame that companies like Meta, Alphabet, and Microsoft had in mind from the outset. As we discussed in an earlier article about YouTube’s methodology for promoting, recommending, and suppressing video content, how effectively can we regulate an industry when most of its doors are sealed to the public? It becomes increasingly clear that Big Tech is expediting its work of separating itself from society: it has spent the last 20 years digging a moat and creating a fiefdom that operates beyond the grip of the law. As companies like IBM, an AI forerunner in its own right, expand their influence by buying or killing the competition, power within Big Tech becomes more consolidated and key decisions in the realm of AI are made by fewer and fewer people.
Maybe all of this has less to do with AI and more to do with the notion that tech companies have developed a kind of power the world hasn’t yet seen: the power to effectively manipulate reality. If we’re all living in The Truman Show and don’t even know it, how would we know anything is wrong? Or maybe we’re allowed to know there are problems with AI, but only in a superficial sense. When algorithms guide you along a set of tram rails, it must be asked: are these merely suggestions by the algorithm, or neatly packaged directives? Discriminative AI, by contrast, works comparatively quietly in the background and to much less public fanfare, processing the massive datasets that enable so many of the services we now take for granted. And there’s good and valuable work to be done here: as the Internet grows, so does the volume of data companies and individuals have to manage and contextualize. Without discriminative AI models, few of the digital experiences we enjoy would be possible. Even with AI, the amount of data generated by the rapidly growing number of devices on the Internet raises serious manageability questions for the future. There are nearly endless applications for discriminative AI in science, medicine, biotechnology, meteorology, climatology, and other hard-science disciplines. As with so many evolving technologies, there are important, practical uses for AI in science, research, and engineering, but the potential for abuse on the consumer-facing side is so staggering that effective legislation really can’t come soon enough. AI is a tool like any other. What we have to contend with in Big Tech is not so much AI itself; it’s a group of self-appointed technocrats who have, time after time, shown total disdain for the public good.
The list of companies that openly sell your personal data to third parties (when they aren’t losing that data to cyberattacks, that is) is long and ignominious. These are the companies that present users with 157-page Terms of Service agreements which, in any other context, would call for review by a lawyer. The same companies that can deplatform people or groups they find personally disagreeable. The very companies that can freeze your funds, revoke your domain, shut down your email, or delete your files, all, usually, with no real consequences from our intrepid regulatory authorities. So the question becomes: do you trust that tech leaders can and will self-police with tools as powerful as these?

  • Beware of these emerging cyberthreats in 2024

    The global cost of a data breach last year was $4.45 million, an increase of 15% over three years. As we step into 2024, it's critical to be aware of emerging technology threats that could disrupt and harm your business. Technology is evolving at a rapid pace, bringing new opportunities and challenges for businesses and individuals alike. Rapid developments in artificial intelligence (AI), machine learning (ML), and quantum computing are leading companies across all industries to radically reconsider their approach to cybersecurity and systems management. While these technologies are poised to make our lives easier, they're also being used to launch sophisticated, large-scale attacks against the networks and devices we depend on. In this article, we'll highlight some emerging technology threats to be aware of in 2024 and beyond.
Data Poisoning Attacks
Data poisoning involves corrupting the datasets used to train AI models. By injecting malicious data, attackers can skew an algorithm's outcomes, which could lead to incorrect decisions in critical sectors like healthcare or finance. Countering this insidious threat means protecting training data integrity and implementing robust validation mechanisms. Businesses should use AI-generated data cautiously, heavily augmented by human intelligence and data from other sources.
5G Network Vulnerabilities
The widespread adoption of 5G technology introduces new attack surfaces. With more connected devices, the attack vector broadens, and IoT devices reliant on 5G networks may become targets for cyberattacks. Securing these devices and implementing strong network protocols is imperative to prevent large-scale attacks. Ensure your business has a robust mobile device management strategy; mobile is taking over much of the workload, and organizations should properly track and manage how these devices access business data.
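The data poisoning scenario described above is easy to demonstrate on a toy scale. The sketch below (all numbers invented for illustration) trains a trivial threshold "model" on transaction amounts labeled fraud or legit, shows how a handful of injected, mislabeled points drags the learned decision boundary, and shows how validating against a separately curated hold-out set can catch the drift.

```python
def train_threshold(examples):
    """Learn a 1-D decision boundary: the midpoint of the two class means."""
    legit = [x for x, label in examples if label == "legit"]
    fraud = [x for x, label in examples if label == "fraud"]
    return (sum(legit) / len(legit) + sum(fraud) / len(fraud)) / 2

def accuracy(threshold, examples):
    """Fraction of examples the threshold model labels correctly."""
    correct = sum(
        (("fraud" if x > threshold else "legit") == label)
        for x, label in examples
    )
    return correct / len(examples)

# Clean training data: small amounts are legit, large amounts are fraud.
clean = [(10, "legit"), (20, "legit"), (30, "legit"),
         (900, "fraud"), (950, "fraud"), (1000, "fraud")]

# Poisoned copy: an attacker injects large amounts mislabeled as legit.
poisoned = clean + [(5000, "legit"), (6000, "legit"), (7000, "legit")]

# Trusted hold-out set, curated separately and used only for validation.
holdout = [(15, "legit"), (25, "legit"), (925, "fraud"), (975, "fraud")]

t_clean = train_threshold(clean)        # boundary at 485
t_poisoned = train_threshold(poisoned)  # boundary dragged past 1000

print(accuracy(t_clean, holdout))      # 1.0
print(accuracy(t_poisoned, holdout))   # 0.5 -- fraud now slips through
```

The validation step is the defense named in the text: a drop in hold-out accuracy after retraining is a signal that the training data may have been tampered with.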
Quantum Computing Vulnerabilities
Quantum computing, the herald of unprecedented computational power, also poses a threat: its immense processing capabilities could crack currently secure encryption methods, and hackers might exploit that power to access sensitive data. This underscores the need for quantum-resistant encryption techniques to safeguard digital information.
Artificial Intelligence (AI) Manipulation
AI, while transformative, can be (and is) used to facilitate the spread of misinformation. Cybercriminals are already creating convincing deepfakes with AI and automating phishing attacks. Vigilance is essential as AI-driven threats grow more sophisticated, demanding robust detection mechanisms to discern genuine from malicious AI-generated content. Regulatory bodies and watchdog groups have proposed mandatory watermarks for AI-generated content to make it easily discernible from human-generated (or human-reviewed) content.
Augmented Reality (AR) and Virtual Reality (VR) Exploits
AR and VR technologies offer immersive experiences, but they also present new vulnerabilities. Cybercriminals might exploit these platforms to deceive users, leading to real-world consequences. Ensuring the security of AR and VR applications is crucial to preventing user manipulation and privacy breaches, especially in sectors like gaming, education, and healthcare.
Ransomware Evolves
Ransomware attacks have evolved beyond simple data encryption. Threat actors now use double extortion tactics: they steal sensitive data before encrypting files, and if victims refuse to pay, they leak or sell that data, causing reputational damage. Defenses against this evolved ransomware threat include:
  • Robust backup solutions
  • Regular cybersecurity training
  • Proactive threat hunting
Supply Chain Attacks Persist
Supply chain attacks remain a persistent threat. Cybercriminals infiltrate third-party vendors or software providers to compromise larger targets.
Strengthening supply chain cybersecurity is critical to preventing cascading cyber incidents. Businesses can do this through rigorous vendor assessments, multi-factor authentication, and continuous monitoring.
Biometric Data Vulnerability
Biometric authentication methods, such as fingerprint or facial recognition, are becoming commonplace. But unlike passwords, users can't change biometric data once it's compromised. Protect biometric data through secure encryption and ensure that service providers follow strict privacy regulations; these measures are paramount to preventing identity theft and fraud.
Advanced Phishing Attacks
Phishing attacks are among the oldest and most common forms of cyberattack, and they're becoming more sophisticated and targeted thanks to AI. For example, hackers customize spear phishing attacks to a specific individual or organization based on online personal or professional information. Another example is vishing attacks, which use voice calls or voice assistants to impersonate legitimate entities and convincingly persuade victims to take certain actions. Ongoing employee phishing training is vital, as are automated solutions to detect and defend against phishing threats.
At Geeks for Business, we believe a proactive approach to cybersecurity is critical. With our trusted cybersecurity partner, Huntress, we hunt for threats within networks before they become breaches. With the complexity of cyberattacks rising, reacting to an attack just isn't enough; our 24/7 managed endpoint detection and response approach allows us to go on the offense against prospective cybercriminals.
Tips for Defending Against These Threats
As technology evolves, so do the threats we face, so it's important to be vigilant and proactive. Here are some tips that can help:
  • Educate yourself and others about the latest technology threats.
  • Use strong passwords and multi-factor authentication for all online accounts.
  • Update your software and devices regularly to fix security vulnerabilities.
  • Avoid clicking suspicious links or attachments in emails or messages.
  • Verify the identity and legitimacy of any callers or senders before providing information or taking action.
  • Back up your data regularly to prevent data loss in case of a cyberattack.
  • Invest in a reliable cyber insurance policy that covers your specific needs and risks.
  • Report any suspicious or malicious activity to the relevant authorities.
Need Help Ensuring Your Cybersecurity Is Ready for 2024?
Last year's solutions might not be enough to protect against this year's threats. Don't leave your security at risk. We help small and medium businesses throughout central North Carolina manage their IT, reduce costs and complexity, expose vulnerabilities, and secure critical business assets. Reach out to Geeks for Business today to schedule a chat.
Article used with permission from The Technology Press.

  • 8GB of RAM Just Isn't Enough

    When a company with the resources and reach of Apple still sells a base configuration MacBook with 8GB of RAM and 256GB of disk space in 2023, something has gone wrong. This isn’t to say other laptop makers aren’t guilty of the same: Lenovo, Dell, and HP, the Big Three names in Windows-based laptops, all offer laptops configured with 8GB of non-upgradable RAM. The practice is demonstrably harmful to prospective customers who aren’t technically savvy; in 2023, an instance of Google Chrome with 15 or 20 tabs open, plus the demands of Windows itself and any other apps you might be running, can easily swallow up 8GB of memory. Even if you’re a light user who never has many browser tabs open at once, consider the implications of soldered, non-upgradable RAM: you’re stuck with this configuration for the life of the computer. If your needs ever change, or your workload requires more memory, you’ll suddenly find yourself in the market for a new laptop. Apple is perhaps the most flagrant offender of all: a brand-new M2-based 13” MacBook Air starts at $1099, and configuring it with 16GB of RAM (which you should consider the bare minimum if you intend to keep the laptop for several years) raises the price to an astonishing $1299. That’s a $200 upcharge for another 8GB of RAM, while the market price for an 8GB DDR5 memory module is approximately $23. Yes, Apple has to make a profit margin. Yes, Apple’s memory architecture is different from that of traditional x86-based laptops. Yes, Apple’s memory is “Unified Memory” and is extremely fast, but so what? Fast memory won’t let you run more programs, work with more resource-hungry software, or open more tabs in Chrome or Safari. Worse yet, if you want more storage space on your MacBook Air, that’s another $200.
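The claim that 8GB evaporates quickly can be put in rough numbers. The figures below are ballpark assumptions for illustration, not measurements; real footprints vary widely by OS version, site, and workload.

```python
# Ballpark working-set estimates in MB (illustrative assumptions only).
os_baseline = 2500          # assumed Windows 11 idle footprint
chrome_base = 500           # assumed browser process overhead
per_tab = 250               # assumed JavaScript-heavy tab average
tabs = 18                   # "15 or 20 tabs" from the text
background_apps = 1500      # chat, music, updater agents, etc.

used_mb = os_baseline + chrome_base + tabs * per_tab + background_apps
total_mb = 8 * 1024  # 8GB machine

print(f"Estimated in use: {used_mb} MB of {total_mb} MB")
print(f"Headroom: {total_mb - used_mb} MB")
```

Under these assumptions the estimate lands at 9000 MB against 8192 MB available, meaning the machine is already swapping before any heavier application launches.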
Suddenly, you’re considering a $1500 laptop, and you’ve spent $400 to ‘upgrade’ to a RAM and storage specification that has been standard among Windows laptops for several years, at lower prices. But if you want macOS, there are no alternatives: Apple makes the hardware, Apple makes the software, and Apple sets the prices. And there is truth in the idea that Apple users are loyal to Apple, that they expect a premium experience and so are willing to pay a premium price. Apple’s ecosystem and integration among its own products is second to none. All of these ease of use features and other Apple niceties have an intangible value; maybe $1500 for what many would now consider a ‘base’ spec laptop isn’t a bad deal, after all. This may all be true, but Apple’s dogged adherence to offering 8GB of RAM as standard in any laptop is absurd. I’m left to wonder how many college students, perhaps pursuing their degrees in computer science or engineering, unpacked their brand new MacBook Airs and were faced with dreaded Memory Pressure warnings from macOS upon loading up all of their applications. Even for office workers who deal mainly in cloud apps, 8GB of RAM can show its limitations quickly. Although Apple’s new memory and storage systems are both super fast, constantly writing to disk when out of RAM can shorten that disk’s lifespan considerably. These are fundamental concepts of computer science that Apple must surely be aware of. After all, Apple is selling a premium product at a premium price. Why offer an entry-level hardware configuration that reads like the spec sheet of a 2013 Windows PC? Not to mention the fact that maintaining two different product lines for the 8GB and 16GB system boards that go into new MacBook Airs must be an additional expense for Apple that they could otherwise eliminate. 
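The swap-wear concern can be sketched with rough numbers. This is a minimal illustration assuming a hypothetical drive endurance rating and hypothetical daily swap volumes; neither figure is an Apple specification:

```python
# Rough illustration of how heavy swapping eats into an SSD's rated
# endurance. All figures are hypothetical examples, not Apple specs.

tbw_rating = 150               # drive endurance in terabytes written (TBW)
light_swap_gb_per_day = 5      # a machine with ample RAM, paging occasionally
heavy_swap_gb_per_day = 60     # a memory-starved machine paging constantly

def years_to_exhaust(tbw, gb_per_day):
    """Years until cumulative swap writes alone reach the TBW rating."""
    return (tbw * 1000) / gb_per_day / 365

print(f"Light swapping: ~{years_to_exhaust(tbw_rating, light_swap_gb_per_day):.0f} years")
print(f"Heavy swapping: ~{years_to_exhaust(tbw_rating, heavy_swap_gb_per_day):.0f} years")
```

The exact figures matter less than the shape of the relationship: a machine that is chronically short on RAM can burn through an order of magnitude more write endurance than one that rarely swaps, on a drive that, in a MacBook, cannot be replaced.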
For a company so focused on the user experience, rather than the technical specs beneath it, Apple sure seems to be going out of its way to ignore how those specs shape that experience. To this, Apple might say, “if you need more than 8GB of RAM, you’re a pro user and should buy the MacBook Pro, which is configurable with up to 96GB of RAM”. Certainly, you can buy a MacBook Pro with 96GB of RAM for a cool $4300. But let’s zoom out: just because someone needs 16GB of memory, does that make them a ‘pro’ user? Apple caps its M2 MacBook Airs at 24GB of RAM, a limit that seems rather intentional: it gently guides anyone who needs more into the loving embrace of pricey MacBook Pro 14s and 16s with pricey 32GB RAM configurations. Needing 16GB of memory doesn’t immediately imply that you’re a professional video or photo editor, or that you’re working in complex financial or scientific software with massive datasets and huge RAM requirements. Maybe you’re just a modern user with modern hardware needs. Is Apple this bad at reading the market? While Apple generally makes fantastic hardware with fabulously sleek integrations with other Apple devices, their ostensible perspective that obsessing over hardware specs is gauche or unseemly does their users a great disservice. In the modern era of JavaScript-heavy websites and memory-hungry applications, 8GB of RAM just doesn’t cut it for most people, whether they immediately realize it or not. Even if memory in MacBooks could still be user-upgraded, most users wouldn’t attempt the upgrade. Most people aren’t extremely technical; in technical circles, like those I travel in, it’s easy to start to imagine that anyone could or would want to take apart their computer and upgrade RAM or install a higher capacity SSD. In reality, the average person can’t be bothered. And who can blame them? Apple is losing touch with its core audience if it fails to evolve its MacBook configurations with the times. 
Seemingly gone are the days of Apple’s bold hardware adventures, extremely competitive hardware specifications, and focus on power users. That attitude earned Apple much of the success it enjoys today. Now, Apple seems like a company coasting mostly on past innovation, with only the occasional current breakthrough. Despite Apple’s deft concealment of its devices’ technical underpinnings from its users, those technical underpinnings still exist and still matter. The average user probably has no idea how much memory or storage capacity they need. They rely on Apple (or Apple’s Geniuses) to tell them what’s best. In spite of ubiquitous cloud storage options, local storage demands are still growing, as are memory requirements; surely, a company with the technical and engineering prowess of Apple can understand the importance of these things. To put it simply: 8GB of RAM in a premium product with a premium price tag is a contradiction. It’s akin to buying a Rolls-Royce Phantom with a 4-cylinder engine and a 5-gallon gas tank. Even though Apple customers and Rolls-Royce customers probably don’t spend much time gawking at spec sheets, the specs still matter. Sadly, across the board, upgradable memory in laptops is a phenomenon on the decline. Laptop makers’ claim that people just want thinner and thinner machines, at the expense of all else (a claim those companies are loath to prove), has led to the adoption of soldered RAM in an effort to slim down devices. But, in reality, such claims are likely nothing more than marketing ephemera–a gambit to upsell people into higher-specced laptops at premium prices. In our opinion, this is more than just a sales tactic used by a stagnating industry to increase revenues–it’s another salvo in the war on Right to Repair and the very notion of actually owning what you buy.

  • The day the world went away

    – COVID exposed the frailty of just-in-time supply chains, and things aren’t getting better. Since 2020, procurement in the tech sector has steadily become more difficult. Corporate purchasers and consumer buyers alike continue to grapple with both finished product and component shortages. Promises that the supply chain breakdowns brought on by the pandemic would be remedied by now have gone unfulfilled. Why hasn’t that product been released yet? Why are application-specific processors still so hard to find? Why are critical components needed for networking, industrial computing, IoT, and edge computing missing in action? The supply chain crisis was never really fixed, and mainstream publications have backed away from covering the issue. During the first quarter of 2020, as lockdowns and quarantines began, consumer technology devices like laptops and webcams saw a massive surge in demand. This sudden surge shocked our “just-in-time” supply chain model, in which inventory is only moved as demand forecasts dictate. This logistical model saves retailers, warehouses, and manufacturers on storage costs, but creates supply-side bottlenecks when a supply or demand shock hits the system. Throughout 2021 and 2022, supply chain “disruptions” continued to ripple through the economy, underscoring just how fragile our just-in-time paradigm is. Data science has made just-in-time possible: advanced demand forecasting, automated order management, and sophisticated consumer behavior analysis have not only enabled just-in-time logistics, but made it the de facto standard for the industrialized world. All the while, critical component shortages persisted, in spite of companies’ best PR efforts to hand-wave the mounting problems away. The Raspberry Pi Trading Company, famous for the wildly popular Raspberry Pi single-board computer, is still struggling to increase its manufacturing output in order to meet demand for its hardware. 
The issue: a shortage in one component needed to manufacture a Raspberry Pi brings production to a halt. WiFi chips, Bluetooth radios, USB-C ports, voltage regulators, rectifiers, Ethernet PHYs are all necessary to make a Raspberry Pi 4, for example. If WiFi chips are in short supply, the boards can’t be finished. Car manufacturers encountered a similar dilemma, with Ford unable to send finished vehicles to dealers due to the widely-publicized “chip” shortage. Untold thousands of vehicles sat idle while carmakers waited for the supply chain to unsnarl itself. Other cars were shipped to dealers, missing highly-touted “smart” features like Apple CarPlay; if Ford, GM, and others had suspended sales of every vehicle that didn’t have every chip it needed, they would have had to contend with massive financial losses. As demand for pickup trucks and luxury vehicles increased, car-buyers were faced with purchasing an unfinished product, all thanks to the just-in-time supply chain. The snarled supply chain likewise drove lumber prices through the roof in 2021, a price shock from which the market still hasn’t fully recovered. Coupled with very hot inflation rates, sticker shock across a broad swath of industries became normalized. Just-in-time was still broken, even as the pandemic receded and government-imposed economic restrictions were lifted. By early 2022, the true scope of our supply chain wreckage was coming into view. With the headwinds of inflation, personal debt, bankruptcy, and business failures during COVID, the back of the supply chain had been broken. Shortages were going to improve, we were assured, as things got back to normal. “As things get back to normal”, a soothing refrain for a world shaken by a very real global supply chain failure. Now, in the latter half of 2023, just when things will return to normal remains unknown. As those who work in corporate IT procurement will attest, things have not gotten better. 
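The just-in-time mechanics described above can be sketched with a toy reorder-point model. All figures below are invented for illustration; the point is how thin safety stock plus a long lead time leaves no slack when demand jumps:

```python
# Toy reorder-point model illustrating why just-in-time inventory breaks
# under a demand shock. All numbers are invented for illustration.

def reorder_point(daily_demand, lead_time_days, safety_stock):
    """Inventory level at which a new order must be placed."""
    return daily_demand * lead_time_days + safety_stock

# Forecast-driven plan: 100 units/day of demand, a 14-day supplier lead
# time, and a deliberately thin just-in-time safety stock.
rop = reorder_point(daily_demand=100, lead_time_days=14, safety_stock=50)

# A pandemic-style shock: demand doubles overnight while the order placed
# at the old reorder point is still in transit.
on_hand = rop                  # we reordered exactly on schedule
shock_demand = 200             # units/day after the shock
days_until_stockout = on_hand / shock_demand

print(f"Reorder point: {rop} units")
print(f"Stock runs out {14 - days_until_stockout:.1f} days before the order arrives")
```

With the forecast intact, the system hums along with minimal warehousing cost; the moment demand outruns the forecast, the shelves empty days before the replenishment order can land, which is essentially what happened to laptops, webcams, and chips in 2020.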
Hardware launches have been postponed while prices have risen dramatically, both in end-user products such as laptops and in components. Once-commonplace corporate IT devices, like managed Ethernet switches, have seen persistent shortages, while manufacturers of those devices have had to scramble to substitute new chips for the old chips they can’t easily source anymore. These chip substitutions constitute a problem all their own–due to the scarcity of component-level hardware, like NAND flash controllers, solid state disks have been especially vulnerable to unannounced hardware “updates”, which usually lead to reduced performance and reliability. As unscrupulous manufacturers seize this opportunity to flood the market with low-quality knock-off chips and other hardware, companies around the world are being forced to accept a new reality in which the old supply chain is gone and isn’t coming back. Where we used to be able to promise a reliable product at a certain price, we now have to accept substitute goods that may not meet the performance, feature, or reliability characteristics of the original product. Worse yet, the replacement product may not meet the client’s needs, either. The sudden shock to our tightly-wound supply chain has given rise to a cascading failure of epic proportions. We are in uncharted territory, with no historical precedent or context to inform our next move. The biggest tech companies, like Apple and Amazon, have weathered the storm relatively unscathed, owing to their massive liquidity. For the smaller (but still big) players in tech, the news isn’t so rosy. For the smallest players, such as managed service providers, small software developers, game studios, and system integrators, the state of affairs is bleak. 
A company with Amazon’s scale can simply tell a Chinese factory, “I’ll pay you $100 million to manufacture these chips, to my specifications”, and the factory will take their money, throw out every other bid, and make the chips. The biggest players get first pick of the world’s limited high-tech manufacturing capacity. Everyone else gets to fight over the scraps. In chipmaking, bringing a new factory online can cost billions of dollars, take years to complete, and typically requires huge amounts of raw materials, including fresh water. Intel can speak to the difficulty and expense of semiconductor fabrication, as their recent missteps have shown. Of course, this assumes a complex chip design (such as the x86-based processors most of us rely on in our own computers); a simpler chip design, such as ARM or RISC-V, is less costly to manufacture and can be reworked to efficiently suit a wider range of applications more easily than x86. Even so, manufacturing enough ARM SoCs (systems-on-chip) for its single-board computers has also proven difficult for The Raspberry Pi Trading Company. There is no safe haven in the world of just-in-time logistics. It is possible, or perhaps even likely, that the deterioration of our gossamer supply chain will lead to a resurgence in local, more resilient manufacturing. If we have any hope of sustaining ourselves, we have to move the point of manufacture closer to its audience. Shipping a laptop battery from China to Maine, for example, is just demonstrably stupid. Growing populations and growing demand for technology will only shine a more unkind light on our current way of moving inventory around the world. Maybe just-in-time is peak capitalism, and we needed to see it destroyed before our very eyes to appreciate just what’s so wrong with it. 
In a world that is both tech-obsessed and contending with serious environmental and resource-related challenges, reducing both the complexity and the impact of manufacturing is a clear net positive.

  • Algorithm-directed content has made reality a hall of mirrors

    How much content is organic? YouTube content creators sometimes break the fourth wall by admitting (often for comic relief) that the algorithm made them do it. YouTube’s algorithm is poorly understood, perhaps even by the Google engineers who develop it; the average YouTube viewer has no clue how the algorithm works, or why they see recommendations for videos that seemingly have nothing to do with their watch histories. To the outside observer, the algorithm is mostly a black box. Content creators, however, have at least some insight into what the algorithm is doing: by analyzing the performance metrics of their past video uploads, they can start to infer which kinds of content resonate with the algorithm. One might say, “the algorithm isn’t to blame–viewers decide what they like and don’t like”, but this misses so much of what the algorithm itself is doing behind the scenes. In a 2018 interview, software engineer Guillaume Chaslot spoke about his time at YouTube, telling The Guardian, “YouTube is something that looks like reality, but it is distorted to make you spend more time online. The recommendation algorithm is not optimizing for what is truthful, or balanced, or healthy for democracy.” Chaslot explained that YouTube’s algorithm is always in flux, with different inputs given different weights over time. “Watch time was the priority. Everything else was considered a distraction”, he told The Guardian. YouTube’s algorithm behaves in ways that appear outwardly bizarre: recommending videos from channels with 4 subscribers, videos with zero views, or videos from spam and affiliate link clickfarm channels. To an uninitiated third party, YouTube’s algorithm seems a bit obtuse, unpredictable, mercurial; or perhaps the algorithm is working exactly as intended. The algorithm, not the audience, increasingly directs the kinds of content that uploaders make. 
This is assuming, of course, that the uploader wants to monetize their content and collect ad revenue from high-performing videos. A channel you subscribe to that normally uploads home repair videos may decide to upload a new kind of video, where the content creator travels across the country to interview a master carpenter about his trade. This video, while very well-received among the channel’s audience, has a fraction of the total watch hours and much less “engagement” than usual. This results in lower ad revenue for the creator, and in the video being shown to fewer people outside the channel’s existing audience. The content was good, but the reception was poor. Was the reception poor because people just decided the video was a bit too long? Not relevant enough to home repair? Was Mercury perhaps in retrograde? No one really knows, not even the channel owner. The algorithm has decided, based on its unknowable metrics, that this video is a bad video, and it won’t be going out of its way to promote it on the front page of YouTube. As a result, that content creator, despite his love of the video he made and the subject matter, will mothball his plans to create more videos in his “Interviews with a Master Carpenter” series because the money just isn’t there. Conceivably, the first video performed poorly just by virtue of its newness, not due to any intrinsic flaw in the content. Maybe subsequent videos in the series would have done better. It doesn’t matter, though, because the algorithm has all but whispered to the channel owner, “don’t make this kind of video again”. Consequently, the content creator returns to his normal fare, and watch hours, views, likes, and “engagement” go back up. On a broad scale, this behavior would seem to have a chilling effect on speech itself. Algorithms and machine learning models are playing an outsize role in what kinds of content people make and what kinds of things we see. 
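Chaslot’s point about shifting input weights can be illustrated with a toy ranking function. The metrics, weights, and videos below are all invented; real recommendation systems are vastly more complex, but the principle, that the weighting alone decides which video gets promoted, is the same:

```python
# Toy ranking score showing how re-weighting an algorithm's inputs changes
# which video gets promoted. All metrics and weights are invented.

def score(video, weights):
    """Weighted sum of a video's engagement signals."""
    return sum(weights[k] * video[k] for k in weights)

carpenter_interview = {"watch_hours": 120, "likes": 900, "ctr": 0.09}
typical_repair_vid  = {"watch_hours": 400, "likes": 500, "ctr": 0.04}

# A watch-time-first weighting, in the spirit of Chaslot's description:
watch_time_first = {"watch_hours": 1.0, "likes": 0.01, "ctr": 10}
# A hypothetical weighting that rewards audience approval instead:
balanced = {"watch_hours": 0.05, "likes": 0.1, "ctr": 1000}

for name, weights in (("watch-time-first", watch_time_first),
                      ("balanced", balanced)):
    winner = max((carpenter_interview, typical_repair_vid),
                 key=lambda v: score(v, weights))
    label = "interview" if winner is carpenter_interview else "repair video"
    print(f"{name}: promotes the {label}")
```

The well-loved interview loses under the watch-time-first weighting and wins under the hypothetical balanced one; neither outcome says anything about the video’s quality, only about which signals the platform chose to reward that quarter.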
Is important work being taken out back and shot because the algorithm has concluded, based on historical data, that it won’t perform well? Or is the algorithm, even by chance, ensuring that videos it thinks are overly critical of a brand or company, or just generally problematic, won’t be seen again? Artificial intelligence doesn’t have the ability to be conscientious–it lacks self-awareness. All the same, human provocateurs can and do put their desires and agendas into these algorithms, giving AI the power to selectively dismiss and bury content that it’s been instructed not to like. It’s important not to conflate this weaponization of AI with simple clickbait. While clickbait is problematic in its own right, it serves as an incentive for publishing certain kinds of content, rather than a disincentive against making certain kinds of content. And make no mistake: algorithmic activism is weaponization of artificial intelligence. As we fall further down the trash-compactor-chute of tech dystopia, we need to remember that the companies who wield AI as a cudgel against certain content are the same companies who heralded AI as a net positive to humanity. Google’s former motto was “don’t be evil”; they’ve since removed that utterance from their corporate code of conduct, as if Google needed to make it any more obvious that they are evil. Regardless, there is no “if” AI becomes a weapon, because it already is one. The question is how bad we, and our elected representatives, will allow things to get before we place hard limits on AI’s scope in the public domain. Print media hasn’t escaped the problem, either. Print, if anything, is a vestigial organ. As it may not surprise you to learn, print media follows whatever is in fashion in digital media. So, when you read your local newspaper and ask, “why is every story about a shooting or a new tech company promising to hire a whole bunch of people?”, remember the algorithm. 
The same algorithm that tells YouTube content creators to be very careful about publishing unpopular material. The algorithm itself defines popularity, so you’ll never really know if the videos it deep-sixes are resonating with people or not; there are no guardrails, no systems of checks-and-balances, and you can’t interrogate the Black Box. When we factor in the use of AI to write articles, product reviews, and other content, we find we really do exist in an algorithmic hall of mirrors. How can we discern which videos or articles to trust if the entire game is rigged? The quest tech companies have embarked on with AI is fairly straightforward: do whatever is necessary to beat the opponent’s AI model into submission. Then, the AI that emerges victorious gets to decide reality for the rest of us. Consider deepfakes–what if, in the future, an anti-establishment YouTube creator is framed for murder using an AI-generated confession that the man himself never uttered? This isn’t a cheesy Blade Runner-style screenplay pitch; it’s already happening. Deepfakes have already necessitated using AI to sniff out phony and doctored content. The Massachusetts Institute of Technology is running a deepfake detection experiment, research from which has been published in the Proceedings of the National Academy of Sciences of the United States of America. The new space race between large language models in AI will only intensify, and our reality will be dragged along with it. Truepic has also developed a system to authenticate videos and images, aiming to guarantee the authenticity of digital master copies, thereby preventing the fraud and abuse that deepfakes engender. Deepfakes also promise to make corporate espionage easier, and cyberattacks harder to prevent, thanks to the sophisticated phishing attacks the technology facilitates. Because of AI-generated deepfakes, requirements for companies to obtain cybersecurity insurance will no doubt become more stringent. 
Cybersecurity insurers already require out-of-band authentication (OOBA) as a defense against deepfake or impersonation-based attacks against clients; these authentication strategies are only one piece of the puzzle in mitigating these emerging deepfake threats, however. Additional software tools, authentication factors, user training, and the use of advanced AI technologies will become necessary components of protecting employees and clients from malevolent AI attacks. The aim of this post isn’t so much to make you doubt your experiences, but rather to encourage you to ask some pointed questions about the things you read and watch online. The algorithms that run social media will only grow more sophisticated, and so, too, must our responses. If we don’t treat artificial intelligence with due caution and subject it to much-needed scientific and legislative rigor, we will find ourselves in a frightening new reality that we won’t be able to escape.

  • Outsmarting your smart devices

    The Internet of Things, a growing constellation of so-called “smart” devices such as doorbells, Internet-connected cameras, and smoke detectors, has long been criticized for its almost total absence of security. Smart device makers like TP-Link, Amazon, Google, and Wyze are no strangers to controversy when it comes to the sorry state of IoT security, yet the U.S. government hasn’t done much to enact legislation to protect the buyers of these devices. In 2025, the IoT device market is projected to bring in nearly $31 billion in revenue for smart device manufacturers. As the market grows, so do opportunities for malicious actors to exploit weaknesses in IoT devices’ firmware, software, and the cloud infrastructure that lets users conveniently manage those devices from their PCs and smartphones. The statistics are grim: between January and June 2021, Kaspersky estimated that approximately 1.5 billion attacks were carried out against these Internet-connected smart devices. Kaspersky further found that approximately 872 million, or 58% of the total, were carried out with the intent to mine cryptocurrency on those devices. While each IoT device has fairly minuscule processing power, a network of a billion or more of these devices mining cryptocurrency or spreading malware is a formidable threat. While compromised smart home devices can expose sensitive data and cost consumers money, a much more critical threat exists in the medical IoT sector. In early 2022, Cynerio found that 53% of medical devices have a known critical vulnerability. These vulnerabilities often go unpatched by the medical device manufacturers and pose a serious threat to patient health and safety. Questions also arise about an insurer’s willingness to cover a hospital or medical facility that implements devices which contain these vulnerabilities, unless mitigating actions are taken. 
Cynerio’s January 2022 report on the state of the medical Internet of Things also found that a majority of devices used in medicine (pharmacology, oncology, laboratory) are running versions of Windows older than Windows 10. This includes medical devices running versions of Windows as old as Windows XP, released in 2001. Microsoft ended all support for Windows XP in April 2014, but a significant number of expensive medical devices, such as X-rays, MRIs, and CAT scan machines, still rely on computers running the now 22-year-old operating system. Research by Palo Alto Networks in March 2020 found that 83% of these devices rely on unsupported operating systems, such as Windows XP and Windows 7. Hospitals are usually reluctant to upgrade, even if unsupported software puts their patients at risk or jeopardizes their HIPAA compliance, because upgrading operating systems may mean upgrading expensive hardware. Given skyrocketing costs for the patient and the provider, hospitals find themselves in the unenviable position of having to choose between painting an ever-larger security target on their backs or spending millions of dollars on hardware and software upgrades. Unfortunately for hospitals and other medical facilities, refusing to upgrade can mean serious fines imposed by the federal government. Just last month, the Department of Health and Human Services reached a $75,000 settlement with Kentucky-based iHealth Solutions, provider of software and services for the medical sector, for violating HIPAA security and privacy laws. HHS determined that iHealth Solutions did not disclose or remediate weaknesses within its own network, leading to a data breach and release of patient records in 2017. For the healthcare industry in particular, network security and compliance have become especially thorny issues, given the requirements that federal and state laws set forth for the transmission and storage of patient data. 
The specter of compromised medical devices only adds to the pressure on hospitals to employ best security and networking practices, lock down devices, and deploy software and hardware from known vendors with a track record of supporting their products. For non-medical industries, the stakes may be lower, but the importance of data security should still be top-of-mind for business owners. The Internet of Things is rapidly evolving and represents substantial added value to businesses who want to harness data to make better decisions; the devices we allow on our networks, however, must be managed and monitored to ensure they don’t become a liability. IoT security isn’t going to become easier as the segment grows, in our opinion. More devices on networks mean more potential for exploits and theft of sensitive information. When we talk about data breaches in 2023, we don’t ask “if”, but “when”. This probably isn’t the optimistic take that business owners and IT managers want to hear, but threats are only becoming more complex. Data security is hard and requires an ongoing effort from your IT provider, as well as an ongoing commitment from you, the business owner–anything less can expose you to identity theft, fraud, regulatory fines, or even the loss of your business. Geek Housecalls and Geeks for Business offer free security consultations for home and business. Get in touch today for your free consultation and a detailed IT plan, tailored to your unique needs.

  • Managed storage as a defense against ransomware

    It’s getting harder to trust cloud storage providers --- In the wake of revelations that Western Digital’s internal IT resources were compromised by a malicious third-party hacker group, serious questions about the security and reliability of cloud storage services have again been raised. This isn’t the first time a company entrusted with massive stores of sensitive data has been compromised, either; in 2015, health insurance provider Anthem experienced a massive data breach which exposed the personally-identifying information (PII) of around 80 million of its customers. In 2020, Anthem struck a deal with a group of State Attorneys General to pay a $39.5 million settlement relating to the hack. Anthem denied any wrongdoing. We’ve discussed data breaches in this blog before, but the frequency and severity with which they now occur has encouraged us to impress upon our clients the importance of managed storage services. Whether you pay for end-user cloud storage services like iCloud or Google Drive, your sensitive information is still in the cloud. Healthcare information, browser search histories, call logs, telemetry from smart devices and other medical devices, financial information, and more are all stored somewhere. Is that data encrypted? Are the companies who initially gathered your data even still in business? If they were acquired by another company, did the parent company inform you what they planned to do with your data? Has your data been sold to third parties without your knowledge or consent? These massive data troves, often poorly secured and subject to nebulous or nonexistent regulation, are valuable to both “legitimate” outfits like advertisers as well as nefarious entities, like black-hat hacking groups. Some of this data can’t easily be managed; when you consent to use a product or service, you’re entrusting your information to a corporate entity, or entities, who probably don’t have your best interests in mind. Even Microsoft sells your data. 
To a degree, you’re at the mercy of these tech giants’ End User License Agreements; these days, it’s all but impossible to go “off the grid” as far as data gathering is concerned. However, while the wheels of legislation may turn slowly, you can limit the amount of data you release to third-parties, by controlling much of the data that is personally significant to you (like family photos, videos, music, documents, and so on). This is where managed storage comes in. In spite of enticing promotional offers and rock-bottom pricing, it bears repeating that the cloud is just someone else’s computer. By using any cloud storage service, no matter the money and corporate pedigree behind it, you’re assuming that the provider has done their due diligence in securing your data. Surely, we can trust the likes of Google, Apple, and Microsoft to understand fundamental concepts of data security, right? Well, can we? It’s important to understand that services like iCloud and Google Drive are inexpensive for a reason. This isn’t to say that you’re getting a less secure product if you pay less money, rather that the services themselves are constantly in flux; Google might offer you 50GB of cloud storage for $1 per month, with a feature you like or use frequently (like integration with your Windows file system). From one month to the next, Google might decide that this feature you like is too costly or complex for them to continue supporting. A while later, that feature is gone. You’re still paying the same (or more) for the service, but the product has changed. Are you still getting what you paid for? To reiterate: you are at the mercy of the companies who manage these cloud storage platforms. This would be less of a concern if no one ever stored any sensitive data on these services, but people do just that–constantly. 
People also back up this sensitive data to one service and keep it nowhere else, placing total faith in the reliability and privacy of the cloud storage service they’ve chosen. This is a dangerous gamble, to say the least. No matter the scale or the cost, all cloud providers eventually suffer a failure. The failure may be catastrophic. You might lose everything. You probably won’t–historically, the largest cloud storage providers have been pretty good about keeping user data intact. But is this a bet you want to make, day in and day out, with irreplaceable data? Beyond just availability, do you trust your cloud provider to secure that data? Lost data is usually less devastating than stolen data, depending on what kind of data is being stored on a cloud storage service. To that end, Geek Housecalls/Geeks for Business is introducing our managed data service. While we have worked closely with the preeminent cloud storage provider Backblaze to help clients streamline their data backups, we recognize that it is, for better or worse, another cloud storage service. We mean no disparagement toward Backblaze (because they’re great at what they do), but cloud storage just isn’t a bulletproof solution to store and secure the rapidly growing amounts of data that the average person generates. As such, we’d like to shine a light on the importance of onsite data storage: our managed data product involves an onsite hardware appliance (known as a Network Attached Storage server) and automation routines written by us, to ensure that your data is backed up on your schedule, securely. Envision a central storage device that pulls in data from each client device on your network (laptops, desktops, phones, smart devices such as security cameras). Now, envision a device that does this without user intervention or configuration. This is our “value-add”; we design and implement both the hardware and software, utilizing industry best practices and hardware from Synology. 
For business clients with more involved data storage requirements, we also work with TrueNAS Enterprise for custom storage server solutions. Network Attached Storage (NAS) has been around for decades, and serves an important purpose when we approach the thorny subjects of data security and data availability. This has held especially true for business use cases, but a logical data storage solution is increasingly critical in home environments, as well. NAS devices, such as those from Synology, are also highly extensible; your storage server can be extended with plugins for popular services like Apple AirPlay and Plex Media Server, in addition to allowing you to host local instances of password management software like Bitwarden. Modern hardware has become fast and efficient enough to unlock a lot of possibilities here, all without needing to pay a cloud storage provider a monthly subscription fee. Most importantly, you retain control of your data, which is something no cloud provider can honestly claim. And finally, we don’t want to make it seem like the big cloud storage providers are bad or can’t be trusted–just that they shouldn’t be your first or only choice for data management. If your storage needs are greater than 50 or 100GB, costs can add up quickly. And it doesn’t really ever make sense to pay a princely sum to host “bulk data” (movies, TV shows, games, music) in the cloud, to begin with. Bulk data storage is much more economical when done on premises (your home or business) and gives you faster and more secure access to that data, as well. We regard solutions like Microsoft OneDrive, Google Drive, and Apple iCloud as good backups for your backups. That is, you should have multiple copies of important files, and those files can be stored in the cloud. However, we do strongly recommend encrypting important files, in case your cloud provider is ever compromised. 
In the realm of data storage, the adage is “two is one and one is none” with respect to how many copies of your data you really need. We recommend that customers keep an onsite data backup, an offsite backup that only connects to the Internet to synchronize data with the onsite copy, and at least one cloud backup (iCloud, Google Drive, Backblaze, OneDrive, etc.). Additionally, cloud services like Google Drive can easily be integrated with onsite backup solutions. If you’d like all of your tax documents, for instance, to be backed up to the cloud as soon as they’re backed up to your local NAS, that’s easily accomplished thanks to the extensibility of modern storage servers. One substantial benefit of having secure, managed backups is that, in the event of a ransomware or other malware attack, you have the ability to roll your systems back to known-good configurations, bypassing the need for expensive, complex malware remediation. Ransomware can’t thrive in an environment where secure data backups exist. This is something that Microsoft claims to offer with its OneDrive product, but such a claim is optimistic at best. Data snapshots, versioning, and offline backups go far beyond the scope of what OneDrive or any other consumer-grade cloud backup solution can hope to offer. Data management is only going to become more complex, and we think it’s time that everyone took control of their data destiny. Give Geek Housecalls and Geeks for Business a call (or email) today to set up a consultation for your storage and security needs.

  • Your laptop’s advertised battery life is a lie

— Windows laptop manufacturers have dramatically overstated battery runtimes for years. With the sole exception of Apple, every single laptop manufacturer has lied, consistently, about their laptops’ battery life. As a study from Digital Trends found in 2017, Apple actually understated the battery runtime of their MacBook Air 13”, claiming a 10-hour battery life while Digital Trends’ testing found the MacBook ran for 12 hours. In this instance, Apple is the only winner. Every manufacturer of Windows laptops was found to have overstated (if we’re being generous) or lied about (if we’re being honest) battery life. Dell was found to have overstated their battery life figures to the tune of 4 hours, while HP overstated battery runtimes by almost 5 hours. When probed for comment on these vast discrepancies, Dell told Digital Trends: “It’s difficult to give a specific battery life expectation that will directly correlate to all customer usage behaviors because every individual uses their PC differently.” Dell’s statement is a cop-out, at best; every Windows laptop manufacturer will no doubt provide a variation of the very same statement Dell gave on the subject of battery life. We can say, authoritatively, that their response to Digital Trends is a complete misdirection. What Dell fails to acknowledge is that battery life tests are generally synthetic and not based on real-world workloads (such as web browsing, video editing, or software development). Let’s dig into what exactly these synthetic battery life tests are and why they are absolutely worthless for providing any useful insight into a device’s real battery life. UL Solutions (Underwriters Laboratories), publisher of various computer benchmarking applications, offers what they describe as a standardized battery life test within their PCMark 10 benchmark suite. The battery life testing portion of PCMark 10 tests laptop battery life under three synthetic workloads: video, modern office, and gaming. 
Each workload test also generates a power usage report which indicates the wattage used by a given laptop under each synthetic workload. While this standard theoretically allows independent reviewers and media outlets to test a laptop’s battery life and produce results directly comparable to other laptops tested with PCMark 10, the reality is murkier. Owing to battery runtime losses due to battery degradation, changes in runtimes due to firmware and operating system updates, as well as possible battery life fluctuations between operating system versions (e.g., Windows 10 versus Windows 11), maintaining a database of reliable battery life numbers becomes a daunting task. This is made worse by the sheer, mind-boggling number of Windows laptops available on the market at any given time. To further complicate matters, some manufacturers sell the same laptop with smaller and larger battery capacity options. Lenovo, for example, offers two variants of their ThinkPad X13 Gen 3 laptop: one with a 41Wh (watt-hour) battery and one with a 54.7Wh battery. Naturally, the variant with the 54.7Wh battery will produce longer battery runtimes thanks to its larger capacity. The problem is that Lenovo’s own website makes it difficult to know which version you’re buying; it’s usually assumed that preconfigured versions of Lenovo laptops with higher-end specifications will also include a larger battery, if one is offered. Retailers likewise often fail to publish battery capacity information if none is made available by Lenovo, in this example. Other online computer review publications, like the seminal Notebookcheck, have created their own battery life tests, which are not standardized and can’t be directly compared to battery runtime numbers produced by other reviewers. In Notebookcheck’s case, however, their test results are internally consistent, and their methodology applies even to older laptops reviewed years ago. 
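To put that capacity difference in perspective: at a fixed average power draw, runtime scales linearly with watt-hours. A quick sketch makes the point; the 5 W average draw here is an illustrative assumption, not a PCMark 10 measurement, and real runtimes fall short of this ideal due to battery wear and conversion losses:

```python
def estimated_runtime_hours(capacity_wh: float, avg_draw_w: float) -> float:
    """Idealized runtime: watt-hours of capacity divided by average watts drawn."""
    if avg_draw_w <= 0:
        raise ValueError("average draw must be positive")
    return capacity_wh / avg_draw_w

# Lenovo's two ThinkPad X13 Gen 3 battery options, at an assumed 5 W average draw:
print(round(estimated_runtime_hours(41.0, 5.0), 1))   # 8.2 hours
print(round(estimated_runtime_hours(54.7, 5.0), 1))   # 10.9 hours
```

Nearly a three-hour spread between two machines sold under the same model name, which is exactly why an unlabeled battery option makes published runtime figures hard to trust.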
Elsewhere on the Internet, one benchmark may specify a 150-nit brightness setting with WiFi disabled for their battery test, while another may specify a 200-nit brightness setting with WiFi enabled. The manner in which battery runtimes are tested is not at all consistent across manufacturers and review outlets. This maelstrom of non-standardized data allows the laptop makers themselves to claim that there is no truth and that everyone’s experience with battery life will be different. We don’t encounter laptop makers understating the battery life of their laptops–why is that? Better to under-promise and over-deliver, isn’t it? Of course, if Dell, HP, Asus, or Lenovo admitted the average person might only see 4 hours of battery life, they’d sell fewer laptops. Apple, especially now that it produces its own silicon, is in the enviable position of offering the best battery life in the industry. Windows laptop manufacturers have taken note of this and are willing to beg, borrow, and steal their way to comparable energy efficiency in their own machines. This, realistically, is impossible: Intel and AMD cannot compete with the efficiency of Apple’s in-house silicon (though AMD is currently still ahead of Intel in this regard). Much of the battery life delta between Windows (x86) and Apple (ARM) is down to architectural differences between x86 and ARM. x86 has existed since 1978 and, while it has grown in complexity since, has not been fundamentally overhauled to address inherent inefficiencies. The x86 architecture contains legacy instruction sets that just aren’t relevant to modern computing workloads. However, this legacy cruft still takes up space on the processor’s die and requires power, whether it’s used or not. This agglomeration of inefficiencies within x86 has made it harder and harder for processor manufacturers (in this case, Intel and AMD) to produce the kinds of efficiency gains that Apple did by jettisoning x86 altogether. 
This is why Apple can understate its battery life and no one else can. As more people find that they can do most, if not all, of their work using web apps and cross-platform apps, Windows starts to become less of a necessity. As a consequence, Windows-based laptops with poor battery life become less of a necessity. Kids in schools across the country have found that they can do their assignments just as easily, if not more easily, on Chromebooks running Google’s ChromeOS as they can on Windows laptops. Software developers and creative professionals in photography and graphic design have long chosen Apple laptops for their reliability and more focused software offerings. Maybe it’s time for users of other stripes to reevaluate whether Windows and x86 are all that compelling, anymore. MacBooks are expensive, though perhaps less so than you imagined: the price of entry hovers around $800 for an entry-level MacBook Air with 8GB of RAM and 256GB of storage. This represents a perfectly adequate specification for casual users and office workers. While $800 is appreciably more expensive than the low-end, $300 Windows machines you might find at Walmart, the differences in battery life, performance, build quality, and overall polish justify spending more. A MacBook will yield a much longer usable lifespan than a budget Windows laptop, as well. Unless a breakthrough happens in x86’s design efficiency, custom ARM silicon (like Apple’s M1 and M2 chips) seems to point the way forward for mobile computing. Regardless of your feelings about Apple’s closed ecosystem and its spendy hardware, Apple does deliver on its battery life promises. Let the PC makers squabble over 5- and 6-hour real-world battery life–you’ve got other options. If you’re in need of a new device or a new fleet of devices for your business, get in touch with Geek Housecalls today and we’ll recommend the best device for your use case, at competitive pricing.

  • Companies are turning their backs on the cloud

    Rising costs and complexity are fostering a cloud repatriation — As of early 2023, a new trend is emerging in corporate IT: companies are increasingly turning their backs on cloud-hosted infrastructure and returning, at least partially, to on-premises infrastructure. High cloud hosting bills and the complexity of managing cloud spending are placing an untenable load on smaller companies who don’t have the manpower or resources to babysit their cloud balance sheets. In a survey by FoundryCo, 40% of survey respondents cited the need to keep cloud spending in check as a roadblock to further use of cloud services. When a company’s IT department is juggling cloud services from multiple cloud providers, this task becomes a growing headache. FinOps systems are available to help companies manage their cloud spending, but buy-in for such a system necessitates more spending, perhaps more hiring, and additional complexity within the organization. For larger companies, the burden of integrating a new systems management paradigm is easier to defray; for smaller organizations, however, implementing a FinOps system for cloud spending management could overwhelm existing budgets. So what are SMBs (small and medium businesses) to do in this situation? Cloud repatriation. In the age of rising cloud computing prices, pulling back cloud resources to on-premises infrastructure can represent substantial cost savings. In a 2022 survey by Anodot, 50% of IT executives reported that “it’s difficult to get cloud costs under control”. 
Further, Anodot’s survey found that: “over a third of participants (37%) have been surprised by their cloud costs; more than half (53%) say the key challenge is gaining true visibility into cloud usage and costs, while 50% said complex cloud pricing and 49% said complex, multi-cloud environments; more than one quarter (28%) of respondents said it takes weeks or months to notice a spike in cloud costs, a figure that has not improved over 2021.” When we consider that the majority of the cloud compute and storage market is dominated by three major players, Amazon Web Services, Microsoft, and Google, it becomes less surprising that pricing is not as transparent as it could or should be. A 2022 study from HashiCorp found that 94% of companies are wasting money on cloud technologies. In its fourth annual multicloud survey, Virtana, a multicloud management platform vendor, found that, among its 350 respondents, 69% said cloud storage accounted for more than one quarter of their total cloud costs, and 23% of respondents said cloud storage spending accounted for more than half of their total cloud costs. Overall, Virtana found that 94% of the IT leaders it surveyed reported rising cloud storage costs and 54% reported spending on cloud storage was growing faster than their overall cloud costs. In addition to concerns about spending and waste, concerns about information security and performance have also entered into the equation for many organizations. With the parade of high-profile data breaches, the largest of which have all occurred since 2010, CIOs and other IT executives are facing increasing operational headwinds as they work to secure their companies’ resources. The burden on smaller organizations is disproportionately larger, owing to smaller budgets and slower reaction times to incidents like ransomware attacks and proprietary data leaks. 
IBM reported in its 2022 Transformation Index that 54% of respondents felt that the public cloud is not secure enough. The challenges facing organizations who use the public cloud are only mounting, as datasets grow larger and the complexity of managing disparate data silos increases in kind. In 2020, data integration vendor Matillion surveyed 200 IT professionals on the issue of data integration and found that 90% of respondents cited lack of data insights as a barrier to growth. Rapid data growth and, consequently, the monumental tasks of management, archival, and security of that data have led to what can only be described as a data crisis at some organizations. Further complicating matters at the physical data storage level itself is the arms race between data storage demands and currently available storage technology. With advanced storage methodologies like HAMR (heat-assisted magnetic recording) soon to hit the market, it may seem like we’re managing to keep up with demands of global data centers. The issue, however, is complexity. As hard drive densities increase, along with the complexity of the physical mechanisms needed to grow their densities, management becomes more difficult. Higher capacity hard drives require tighter tolerances, more advanced engineering techniques, more complex storage controllers, and are more prone to encountering unrecoverable errors during storage array rebuilds. Smaller organizations can glean some useful information on the relative merits of “lifting-and-shifting” their IT infrastructure to the cloud, versus keeping their data and compute on premises. For smaller companies, the calculus for shifting IT infrastructure to the cloud often tilts in favor of utilizing the public cloud in low-demand scenarios, such as identity management, hosted office applications, and VoIP. However, even in lower demand scenarios, inflation in cloud pricing has led to 50% of organizations exceeding their budget for cloud storage spending. 
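Rapid, compounding data growth is easy to underestimate. A short sketch shows how a fixed year-over-year growth rate multiplies storage demand over a few years; the 25% rate used here is an illustrative assumption in line with industry forecasts for cloud storage demand:

```python
def compound_growth(initial: float, annual_rate: float, years: int) -> float:
    """Project a value growing at a fixed annual rate (e.g. 0.25 for 25%/yr)."""
    return initial * (1 + annual_rate) ** years

# Relative storage demand after five years of 25% year-over-year growth:
print(round(compound_growth(1.0, 0.25, 5), 2))  # ~3.05x the starting demand
```

In other words, a budget sized for today's storage footprint is sized for roughly a third of the footprint five years out, which is how "surprised by their cloud costs" happens even to well-run IT departments.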
In February 2023, Google announced price increases in its Google Workspace Flexible Plans and in Google Workspace Enterprise Standard, but also announced new flexible pricing options for organizations looking to migrate their data to Google’s cloud platform. While pricing trends in cloud computing seem to be downward, cloud storage pricing is moving in the opposite direction; demand for cloud data storage is skyrocketing, forecast to grow around 25% year-over-year through 2028. Higher costs are a function of both this explosive demand in the storage sector and in ongoing supply chain snarls that have driven up semiconductor prices worldwide. When this trend will reverse is not clear–high year-over-year price increases in cloud storage are likely here to stay for the foreseeable future. If your business is struggling to get cloud compute and storage costs under control, Geeks for Business can help. Let us analyze your current cloud spending, your business’s compute and storage needs, as well as your forecasted growth, and we can help streamline complex multicloud environments while reducing ongoing costs. Every organization’s needs are unique: on-premises infrastructure isn’t (yet) a cure-all for high operating costs. Get in touch with Geeks for Business today and let’s get to work on getting your IT costs under control.

  • Telcos, with the help of the FCC, are dropping copper and replacing it with nothing

    — In August 2022, the FCC handed down Order 19-72, titled “FCC Grants Relief From Outdated, Burdensome Phone Industry Regulations”. Order 19-72 empowers telecommunications companies to discontinue support for and maintenance of their existing copper-based communications networks in favor of broad fiber-based network upgrades. That legacy copper networks would be upgraded to modern fiber networks is a drum telecom giants like AT&T have been beating since the 1990s. In practice, however, these upgrades still haven’t materialized for millions of households–particularly those in rural areas–because of a deeply ingrained anti-competitive spirit within the telecom industry. The Telecommunications Act of 1996, which is in fact an amendment of the Communications Act of 1934, sought a path to force incumbent telecommunications companies to upgrade their networks. However, as is the case with most modern legislation passed in the United States, this particular bill was largely toothless. Lacking the political power or regulatory authority to make incumbent telcos like AT&T do much of anything, the Telecommunications Act went largely ignored. Over the ensuing decades, incumbent telecom companies like AT&T, GTE, Sprint, Verizon, and Bell South have wrung their hands, dragged their feet, litigated, lobbied, and done everything in their power to delay performing this copper-to-fiber network transition. Worse, the same telcos have for years let their existing copper infrastructure languish and decay, citing “prohibitive” maintenance and upkeep costs. In 2015, AT&T reluctantly struck a deal with the federal government to receive $428 million in subsidies to provide a minimum of 10Mbps downstream Internet service to rural areas of the country. AT&T claimed that it shouldn’t have to provide anything better than the FCC’s existing broadband standard of 4Mbps downstream, 1Mbps upstream. 
Considering AT&T has long enjoyed a virtual protected monopoly thanks to ongoing subsidies from the federal government and special regulatory protections from state governments, their intransigence on performing basic network maintenance and upgrades is especially vexing. After the 1982 dissolution of AT&T’s historic “Bell System” and the partitioning of the Bell system into RBOCs (Regional Bell Operating Companies), the so-called “Baby Bell” companies were slowly reabsorbed into large, national telecommunications companies, thereby establishing new monopolies and effectively rendering the original intent of the 1982 Bell breakup completely moot. In a truly absurd show of the U.S. government’s apparent lack of regulatory authority, AT&T itself purchased BellSouth, which it had divested as an RBOC only 24 years earlier, for the sum of $86 billion in 2006. Now, these same legacy telecommunications companies, like AT&T, enjoy the same monopoly privileges they were forced to give up only about 40 years ago. AT&T abuses its monopoly position in broad daylight, as was evidenced by its paying a $23 million settlement to resolve a federal criminal investigation brought by the Department of Justice. In 2017, AT&T Illinois President Paul La Schiazza conspired with “other parties” to arrange $22,500 in payments to an ally of Illinois House Speaker Michael J. Madigan, essentially buying support for legislation AT&T found agreeable to its agenda (in this case, stepping away from its copper landline obligations in Illinois). Returning to FCC Order 19-72, we see that the federal government continues to not only fail to pass or enforce effective regulatory measures on historically highly-regulated telecom companies, but that the FCC, under former Chairman Ajit Pai’s stewardship, actually went out of its way to facilitate these telecom companies in achieving their own goals. 
Citing “red tape”, Chairman Ajit Pai trotted out the oft-deployed Republican rhetorical device that regulation is what’s holding companies back from necessary innovation and investment, and that deregulation would solve such broad infrastructure problems as lack of telecom service in rural communities. In spite of ongoing subsidies from the federal government to telecom companies, these upgrades have a mysterious way of never, or rarely, materializing. We need look no further than the slow unwinding of the 1982 AT&T breakup if we want a case study in the almost comical failure of deregulation to achieve the ends its proponents promise. As it stands now, in 2023, the FCC order allowing telcos to walk away from copper is in force, with no real roadmap for replacing copper in low-income communities and rural areas, in particular. Of equal importance, there seems to have been little consideration by the FCC of the technical limitations of fiber-based networks as far as landline phone functionality is concerned. Copper-based phone systems carry passive voltage throughout the system, around 48 volts DC when the phone handset is on the hook. This passive voltage allows standard phone handsets, which are not externally powered, to remain online during a power outage. While this may not seem like an issue of major importance in the era of nearly-ubiquitous cell phone ownership, it is an issue which has not been properly addressed by fiber-based Voice-over-IP (VoIP) phone systems. Smartphones rely on batteries for power and on cell tower backup generators for network connectivity, and can’t guarantee reliable 911 or emergency service connectivity during an extended power outage, so placing our faith in smartphones to bridge the gap left by the telcos’ neglect is not a very effective solution. 
During a power outage, VoIP systems will go offline unless an appropriate battery backup system has been installed; such a backup must be of sufficient capacity to keep both an Optical Network Terminal (ONT), the fiber modem, router, or gateway, and any VoIP handsets connected during an extended power outage. Power outages in the United States, as it happens, are also becoming more frequent due to an increase in ‘extreme’ weather events and neglected infrastructure. As a consequence, backup battery systems that promise 8 hours of runtime should not be considered appropriate or sufficient. How many excess deaths can we expect thanks to monopolistic telecom companies and their allies in state and federal government allowing them to just walk away from their legal obligation to provide service? In an extended power outage event, the lack of access to a copper phone line can be a matter of life and death; alarm systems and other security devices also rely on copper phone line connectivity, in addition to traditional landline phones. The current patchwork of state-level regulation regarding copper line access only complicates the matter. VoIP service of any description, whether purchased from your Internet service provider or not, relies on effective backup power solutions at every point in the network: your home, the fiber cabinet, and the local fiber hub, depending on how widespread a given power outage is. Telcos, of course, will fight tooth and nail to spend the least money possible on ensuring continuity of service, as they have demonstrated over decades of grift, lobbying, and fighting basic, common-sense legislation–legislation that might just keep people online and alive during an emergency. While we don’t agree with the shifting of monetary burdens onto customers for the failures of telcos, Geek Housecalls can help you implement a phone system that will keep you connected in a power outage or other emergency. 
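Sizing such a battery backup reduces to a simple energy budget: total load in watts, times desired runtime in hours, derated for inverter losses and for not fully draining the battery. The sketch below uses illustrative load figures (the ONT, router, and handset wattages are assumptions for the example, not measurements of any particular equipment):

```python
def backup_capacity_wh(loads_w, hours, inverter_efficiency=0.85, usable_fraction=0.8):
    """Rough battery capacity (watt-hours) needed to run the given loads for
    `hours`, derated for inverter losses and for the fraction of the battery
    that can safely be discharged."""
    total_w = sum(loads_w)
    return total_w * hours / (inverter_efficiency * usable_fraction)

# Illustrative loads: ONT ~10 W, router ~12 W, VoIP handset ~3 W, for 24 hours:
print(round(backup_capacity_wh([10, 12, 3], 24)))  # ~882 Wh
```

Even a modest 25 W of always-on telephony equipment needs close to a kilowatt-hour of battery to ride out a full day, which is why the 8-hour backup units telcos typically offer fall well short of what an extended outage demands.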
We have extensive experience in VoIP phone systems and in specifying battery backup systems for installations of any size. Get in touch with us today to discuss your phone system needs.

  • Microsoft mistreats its captive audience

    In the Before Times, the Windows operating system was sold on physical media: first, floppy diskettes, then CDs, then DVDs. When you purchased a physical copy of Windows (which until the mid-2000s was the only way to purchase it), you also purchased a license to use that copy of Windows. This license, while subject to Microsoft’s End User Licensing Agreement, was basically immutable; you bought a physical item and you owned a tangible good. In the years since Windows 7, Microsoft has labored tirelessly to turn Windows from a piece of software you bought every few years into a service that you rent. While Microsoft won’t say as much, the intention is to turn Windows into a subscription, like so many other products in the modern tech sphere. Starting with Windows 11 in 2021, an active Internet connection and Microsoft Account are now mandatory in order to complete Windows setup on a new PC (or on an existing PC being upgraded from Windows 10). While there were methods to bypass this requirement in earlier builds of Windows 11, Microsoft has since focused on eliminating such workarounds. The long and short of it is: Microsoft is dedicated to killing the Local Account, whether you like it or not. Windows 11 Pro, it must be noted, is exempted from this mandate (for now) and offers users an “I don’t have Internet” option during setup. Windows 11 Home users are not so favorably considered. While there remain a couple of ways to fool Windows 11 Home into letting the end user create a Local Account, it’s impossible to know for how much longer those tricks will work. The first way to fool Windows is to disable your Internet connection during setup, which is easy if your PC is plugged into an Ethernet cable (just pull the cable!), but becomes more of a chore if your PC connects strictly over WiFi; some users report pressing Alt+F4 when they reach the “Let’s get you connected” screen works to bypass this requirement, while others report it doesn’t. 
At this point, the use of the Command Prompt usually becomes necessary, requiring users to test various unsavory methods found on Google just to create a Local Account. None of this is to say that Windows users shouldn’t be able to use a Microsoft account if they want one, but rather that a choice should be offered at all. The current paradigm within Big Tech of automatically opting unwitting users into things they don’t want or understand is reflected in Microsoft’s efforts to pretend Local Accounts never existed. And the strangest part of it all is that this behavior from Microsoft is so expected and mundane that the tech media has come to largely stop reporting on it. Rather than collectively hold Microsoft’s feet to the flame for their abuse of a largely captive audience, we’ve instead agreed to just suffer through it. Microsoft, being no stranger to antitrust lawsuits, has to imagine that it is walking on thin ice as it implements more anti-user and anti-competitive features that allow it to abuse its monopoly position in desktop operating systems. While Microsoft may hope to avoid further antitrust litigation, it has recently made itself a target by bidding to acquire video game giant Activision Blizzard for an astounding $69 billion. Now, Microsoft’s legal team might, with some success, argue that it no longer has a monopoly in the operating system space. Rivals do exist: Apple’s macOS and Linux-based operating systems are suitable replacements for Windows for a minority of current Windows users. Whether Microsoft acknowledges it or not, it has a largely captive audience in its Windows user-base. When one considers that Microsoft Windows and Microsoft Windows Server are staples of the vast majority of corporate and government institutions, it becomes clear that just dumping Windows for Apple’s macOS or a variant of Linux is not only unlikely, it’s absurd. 
While Microsoft’s Windows hegemony has diminished from a lofty 92.02% of the total market in January 2011 to 74.14% in January 2023, it’s obvious that Windows is still king. Decades of backroom deals with other tech giants like Intel and vendor lock-in set Microsoft up for perpetual success, with little incentive to actually improve its products. While Linux might run the world, with the open source kernel powering operating systems that run on 96% of the world’s top one million web servers, Microsoft Windows is still an 800-pound gorilla in both end-user operating systems and corporate IT. Where does this leave users who need Windows because their software doesn’t run on anything else? Stuck. It doesn’t matter if Windows is the best solution, or the most affordable, or the easiest to integrate with an existing tech stack; if your program doesn’t have a Linux or macOS equivalent, you have no recourse. Happily, this forced marriage to Windows has begun to wane with the rise of cloud applications, which run in a web browser and are operating system agnostic. Adobe now offers its Creative Cloud suite of applications for both Windows and macOS (though, sadly, not for Linux-based operating systems like Ubuntu). Microsoft’s own cash-cow productivity suite, Microsoft 365, is also available in web app form, which extends its reach to both macOS and Linux users. Microsoft even now claims to embrace the Linux kernel, having introduced the Windows Subsystem for Linux in Windows 10 back in 2016. Such trends would indicate that Microsoft wants to distance itself from its checkered past as a monopolist and an industry bully. Why, then, does Microsoft continue to do its determined best to make Windows more stifling, more restrictive, and less usable with every subsequent release? The answer is perhaps not a surprising one: Microsoft wants an ecosystem. 
Apple, its largest rival in the desktop operating system space, has an ecosystem, and profits handsomely from tightly controlling the hardware, software, and services its users rely on. Having failed miserably in its foray into smartphones and being battered in the developer and server space by open source, Microsoft appears to be relenting and grudgingly embracing open source. Forgive my cynicism when I say “grudgingly”, as it stems from Microsoft’s public and private dismissal of open source software over the last three-and-a-half decades. Under current CEO Satya Nadella’s leadership, the culture at Microsoft does seem to have changed for the better, relative to the ‘bad old days’ under Steve Ballmer’s and Bill Gates’ tutelage. Attitudes toward open source at Microsoft in its Azure, IoT, and services divisions may well have shifted positively, but Windows remains a puzzling contradiction to those pro-open source shifts. We must consider that the Windows NT kernel is a sprawling, complex mess that few Microsoft engineers even understand, and that rewriting the Windows kernel would be a truly Herculean undertaking. Having said that, nothing changes if nothing changes. The momentum Microsoft has accrued in its other divisions must be applied to its Windows product or the operating system is destined to wither, adding to its market share losses since the turn of the century. Ultimately, we must consider Microsoft's new idea for its operating system now: to sell ad space. Microsoft's enduring obsession with injecting ads directly into Windows has miffed more than a few of its users, but it continues to toy with the idea, sometimes boldly, sometimes with more timidity. Application "stubs" for the likes of Instagram, Hulu, ClipChamp, and Disney+ litter the Start menu of a freshly-installed version of Windows 10 or Windows 11 now, thanks to undoubtedly lucrative partnerships that Microsoft has forged with these companies. 
These stubs are easily uninstalled and serve as more an annoyance than a privacy or security issue, since they aren't full applications, but they foretold a more ominous advertising future for Windows: Microsoft's current push to bake non-removable OneDrive and other first-party ads directly into the operating system itself. The sentiment toward these practices may be negative, but most people won't throw out their Windows PC because of it. Perhaps the larger issue at play isn't what Microsoft, Apple, or Google are doing with their products, but rather advertising itself. Technology and advertising have become so interwoven that the only way to keep ads from eating reality itself is effective legislation and regulation. We'll discuss the state of digital advertising, its future, and its unintended consequences in a later post.
