Connecting Dots


  • 9 Tips for Troubleshooting Networking Issues

There's nothing quite as frustrating as a sudden internet outage. Whether you're working from home, streaming your favorite show, or simply staying connected with friends and family, a loss of internet can disrupt your entire day. The question that inevitably arises is, "why is it down, and when will it come back?" While there's no one-size-fits-all answer, there are several troubleshooting steps you can take to diagnose and potentially resolve the issue.

Basic Troubleshooting: The Quick Fixes

• Reboot Your Router: This classic tech remedy often does the trick. A simple restart can refresh your router's connection and resolve minor glitches.
• Check Connections: Ensure all cables are securely plugged into your router, modem, and devices. Loose or damaged connections can interrupt the internet signal.

Advanced Troubleshooting: Network Diagnostics

• Run a Network Test: To assess the overall health of your network, use the ping command in your device's command prompt. Pinging a public DNS server like 1.1.1.1 will provide information about the latency and packet loss between your device and the internet.
• Refresh Your IP Address: Sometimes a simple IP address renewal can resolve connectivity issues. On Windows, use the ipconfig /release and ipconfig /renew commands to force your device to obtain a new IP address. (A scripted version of the ping test appears after this article's tips.)

Software and Firmware Updates

• Update Router Firmware: Outdated firmware can lead to stability problems. Check your router manufacturer's website for the latest firmware updates and install them as recommended.

Provider-Related Issues

• Contact Your Internet Service Provider: If the above steps don't resolve the issue, it's possible there's a broader outage in your area. Contact your ISP to check for any known service disruptions or planned maintenance. Occasionally, a faulty piece of outside equipment, such as a passive coaxial splitter, is the source of trouble in your network. While end users can replace these devices, we suggest reporting the problem to your ISP so they can troubleshoot and resolve it.

Seeking Professional Assistance

• Consult Your IT Support Team: If you're unable to resolve the issue on your own, reach out to your IT support team for further assistance. They can diagnose more complex problems and provide tailored solutions.
• Consider Professional IT Services: If you don't have an in-house IT team, consider partnering with a professional IT service provider like Geeks for Business. We offer comprehensive network support, security, troubleshooting, and maintenance.

Don't Let Internet Outages Disrupt Your Business

A reliable internet connection is essential for businesses of all sizes. By following these troubleshooting steps and seeking professional help when needed, you can minimize the impact of internet outages and ensure uninterrupted productivity. If you're facing challenging network-related issues, reach out to Geeks for Business today. Whether you're managing a single office, working from home, or connecting multiple sites, Geeks for Business helps clients implement secure networking solutions that keep business running.
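For readers comfortable with a little scripting, here is a minimal sketch, in Python, of the ping test described above. It shells out to the operating system's own ping utility (the count flag differs between Windows and Unix-like systems), so the output is the familiar latency and packet-loss summary; the 1.1.1.1 target matches the example in the article.

```python
import platform
import subprocess

def ping(host: str = "1.1.1.1", count: int = 4) -> bool:
    """Ping a public host and report whether any replies came back."""
    # Windows ping uses -n for the packet count; Linux/macOS use -c.
    flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", flag, str(count), host],
        capture_output=True, text=True,
    )
    print(result.stdout)           # latency and packet-loss summary
    return result.returncode == 0  # 0 means at least one reply arrived

if __name__ == "__main__":
    if ping():
        print("Internet reachable: the problem may be DNS or a specific service.")
    else:
        print("No replies: check cables and router, or contact your ISP.")
```

If the ping succeeds but websites still won't load, the culprit is more likely DNS or a specific service than your physical connection.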

  • Crowdstrike: A Drama in Three Acts

Mistah Kurtz-he dead / A penny for the Old Guy
T.S. Eliot, "The Hollow Men"

On July 19th, 2024, Crowdstrike, a well-known cybersecurity provider within private and public IT realms, managed to send industries from air travel to healthcare into various states of meltdown thanks to an unvalidated agent update. This update threw millions of Windows devices into bootloops, sending companies large and small into utter technical chaos for days. Delta Airlines CEO Ed Bastian alleges Crowdstrike's botched update cost Delta $500 million, a claim which Crowdstrike CEO George Kurtz has dismissed, pinning the blame on Delta for its refusal to accept technical assistance from Crowdstrike. Delta's legal action against Crowdstrike is one in a sea of lawsuits materializing against the company for the massive losses the defective update caused. Overall monetary losses to Crowdstrike's Fortune 500 clients are estimated to be around $5 billion.

To understand why this buggy update was so devastating, we must understand the way Crowdstrike's Falcon EDR (endpoint detection and response) platform works. The Falcon EDR agent requires a kernel-mode driver which grants it low-level access to the Windows operating system. This kernel driver allows the Falcon agent to continuously monitor Windows user space and kernel space for malicious executables, attachments, and other potential cybersecurity threats. This model provides a substantial degree of cybersecurity protection, but the use of a kernel-mode driver presents a double-edged sword: if the driver fails to initialize the Crowdstrike agent correctly, the operating system can crash and subsequently fail to boot. Microsoft's response was to issue a technical incident-response memo about the Crowdstrike failure, discouraging security vendors from using kernel-mode drivers.

Crowdstrike's kernel-mode driver, while signed and blessed by Microsoft, relies on frequent updates from Crowdstrike which are not individually signed or audited by Microsoft or any other third party. Thus, while the driver itself was deemed safe, the Falcon agent failed to parse a bad configuration file from Crowdstrike, causing the program to access memory it shouldn't have accessed and bringing down the kernel, and the operating system along with it. Kernel-space code runs close to the hardware (or 'near the metal'), which is advantageous for cybersecurity applications that need low-level operating system access. This low-level access, however, has to be weighed against the potentially devastating outcomes of a bad update or untested configuration change.

David Weston, Vice President of Microsoft's Enterprise and Operating System Security, outlined a process for granting a security application kernel-space access while reducing the risk to the kernel in the event of a botched update: "For example, security vendors can use minimal sensors that run in kernel mode for data collection and enforcement, limiting exposure to availability issues," he explained. "The remainder of the key product functionality includes managing updates, parsing content, and other operations that can occur isolated within user mode where recoverability is possible."

Crowdstrike places the blame for the failed update on its content validation pipeline. What remains unclear is how many standard industry practices Crowdstrike has actually adopted, such as sandboxing for update and change testing.
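To make the failure mode concrete, here is a minimal, hypothetical sketch of the kind of user-mode validation gate Weston describes. The file format, magic bytes, and record sizes below are invented for illustration; the point is that structural checks happen in user space, where a parser crash is recoverable, before any privileged component ever touches the content.

```python
import struct

# Hypothetical content-file layout, invented for this sketch: a 4-byte
# magic, a 2-byte version, a 2-byte record count, then 16-byte records.
MAGIC = b"CFG1"
RECORD_SIZE = 16

def validate_channel_file(blob: bytes) -> bool:
    """Reject malformed content *before* a privileged component parses it."""
    if len(blob) < 8 or blob[:4] != MAGIC:
        return False
    version, count = struct.unpack_from("<HH", blob, 4)
    if version != 1:
        return False
    # A record count that doesn't match the payload length is exactly the
    # kind of mismatch that can drive a naive parser into out-of-bounds reads.
    return len(blob) == 8 + count * RECORD_SIZE

# Example: a file that claims two records but ships zero bytes of payload
# is rejected in user mode instead of crashing anything privileged.
print(validate_channel_file(MAGIC + struct.pack("<HH", 1, 2)))  # False
```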
Worryingly, this kind of failure suggests a lack of industry-standard CI/CD (continuous integration/continuous deployment) practices, which most likely could have prevented the global outage caused by the bungled update.

In the wake of Crowdstrike's global IT wreckage, Microsoft announced its intention "to work with the anti-malware ecosystem to take advantage of these integrated features to modernize their approach, helping to support and even increase security along with reliability." Microsoft's guidance involves:

• Providing guidance and best practices for updating and rolling out cybersecurity product patches
• Reducing the need for Windows kernel-space access to obtain critical security information
• Implementing enhanced isolation and anti-tampering capabilities in Windows, using technology like VBS Enclaves
• Implementing Zero Trust approaches such as High Integrity Attestation, which provides a method for determining a computer's security status by monitoring its native security features (e.g., Windows Defender for Endpoint)

In spite of the initial backlash against Crowdstrike for its sloppy patching pipeline, most Crowdstrike customers report they plan to remain customers rather than migrate to competing cybersecurity platforms. Whether this speaks to the inherent complexity of switching cybersecurity providers is an open question; in the tech space, vendor lock-in is an ongoing problem that hasn't eased much in recent decades. Some tech analysts have even suggested that now is the time to buy into Crowdstrike's platform, as Crowdstrike is incentivized to beef up its CI/CD practices and focus on delivering a stable product in light of its very public failure to follow best development and test practices.

No matter where Crowdstrike or its users land, the incident points out very real flaws in modern endpoint detection and response platforms. Crowdstrike isn't the only company whose application relies on kernel-level access to the host operating system. The practice is widespread throughout the industry, with Microsoft bearing its own share of culpability in all but forcing security vendors to play in the dangerous kernel sandbox in order to develop security apps that do what they claim on the tin. Microsoft's response to the Crowdstrike fracas is a naked attempt to make Microsoft look good while not-so-subtly throwing security vendors under the bus for doing what they had to do to make their products function as intended. In Microsoft Land, throwing corporate partners, lucrative resellers, and end users under the bus when push comes to shove is nothing new.

In a world where technology is becoming increasingly enshittified, prices increasingly stratospheric, and technical sanity increasingly hard to find, the Crowdstrike fiasco underscores a more central need: to end over-reliance on single vendors, and on single points of failure in general. A company like Crowdstrike or Microsoft will promise you the world (and sell it to you, at prices to match), but when things fall apart they won't be there to save your business. This is the importance of local IT: your business is only as resilient as the IT processes you have in place. Microsoft will sell you Exchange Online, but they won't test your backups for you. Crowdstrike will sell you Falcon, but they aren't going to perform phishing simulations or work with your employees to understand modern security threats.
These companies are selling a solution, and it's up to you to implement that solution in a way that makes sense for your use case while following industry best practices. Get in touch with Geeks for Business today to learn more about how managed IT can keep your business running, even when the Crowdstrikes of the world drop the ball.

  • Microsoft's Enshittification of Everything

"Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification."
Cory Doctorow, "Too Big to Care: Enshittification is a Choice"

Since the release of Windows 11 in late 2021, Microsoft has made a series of increasingly unpopular changes to the Windows ecosystem, bringing into question the future of the operating system. In February 2023, Microsoft unveiled Copilot, a generative AI chatbot that served to replace Microsoft's Cortana. While Microsoft previously claimed more than 150 million Windows users used Cortana, that estimate was probably more than a little optimistic. Cortana's deep integration into Windows 10 allowed users to more effectively perform tasks with voice commands, but it lacked a clear path to monetization; that is, Microsoft couldn't figure out how to turn Cortana into a paid service. With Copilot's introduction in early 2023, development of Cortana ended and Microsoft redirected its Cortana resources into Copilot. While Apple makes use of ChatGPT's AI models for its own Apple Intelligence product, Copilot is developed by GitHub (a subsidiary of Microsoft) and is powered by OpenAI's Codex AI model.

Having joined the big tech AI chorus, Microsoft is placing Copilot and generative AI front and center in its consumer-facing product portfolio. Surprisingly, Microsoft is also pushing its AI agenda in business- and enterprise-focused products, like Azure, Entra, and Microsoft 365 for Enterprise. Considering enterprise customers are traditionally more conservative and adhere to slower adoption and upgrade cycles, Microsoft's aggressive AI push across all fronts seems a risky gambit. As with other consumer-facing AI services, Microsoft has positioned Copilot as a "freemium" product, offering a free tier with limited features and a paid tier with more advanced ones. To bolster this agenda, Microsoft, following in Apple's footsteps, revealed its plans to introduce ARM-powered Windows PCs with deep Copilot integration, under the Copilot+ moniker.

Perhaps the most controversial feature of Microsoft's Copilot+ platform is Recall, a feature that Microsoft claimed would let you search everything on your PC using natural language, thereby removing traditional barriers to finding changes you've made to documents, edits to photos, and so on. It didn't take long, however, for Recall to be skewered by security researchers as a cybersecurity nightmare; relying on an unencrypted database of screenshots, Recall actually wasn't secure at all. Anyone with local access to the computer could easily exfiltrate this database of screenshots, containing untold troves of sensitive user data. Privacy implications for individual users aside, more questions were raised about compliance and data security in corporate and government environments. There were too many unanswered questions about the technical implementation of Recall: how easy it would be to disable, whether it would stay disabled once turned off, and how system administrators would deal with managing it at scale. Microsoft said precious little about data security and user privacy until public backlash forced it to push back Recall's release to an unknown future date.
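To illustrate the researchers' core objection: an unencrypted local database offers no barrier at all to anyone who can read the file. The sketch below is hypothetical (the path and schema are invented, and this is not Recall's actual layout), but it shows how little effort "exfiltration" takes when the store is a plain SQLite file.

```python
import sqlite3

# Hypothetical path for illustration only; the point is that an unencrypted
# SQLite file yields its contents to anyone who can read it from disk.
DB_PATH = r"C:\Users\victim\AppData\recall_demo.db"

# Open read-only: no password, no decryption step, no OS prompt.
con = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)
for (table,) in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
):
    print(table)  # every table enumerates freely; dumping rows is one query away
con.close()
```

Encrypting the store at rest, and gating decryption behind user authentication, is the obvious mitigation that researchers found missing.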
Microsoft's obvious bungling of Recall's technical implementation, and its initial retrenchment when faced with public criticism, speak to more insidious, deeply-ingrained problems at the company. While layoffs are common throughout the tech industry, Microsoft has often been at the fore when it comes to dismissing entire teams and divisions. After finalizing its Activision-Blizzard deal in October 2023, Microsoft fired 2,000 employees from its gaming division, about 10% of all employees within the gaming unit. During the first nine months of 2023, Microsoft reduced its workforce by 16,000, outstripping the 10,000 layoffs it forecast at the beginning of the year. In reducing its gaming unit headcount, Microsoft shuttered multiple game studios, including Arkane Austin, Tango Gameworks, and Alpha Dog Games. Microsoft's treatment of its gaming division is only a microcosm of the wider video game industry's treatment of its own talent: in February 2024, Sony fired 900 employees from its PlayStation division, and Take-Two Interactive (parent company of Rockstar Games) announced plans to cut its workforce by 5% and end development on several games. None of this apparent dysfunction is really shocking, considering corporate acquisitions inevitably result in mass layoffs; roughly 30% of employees are deemed redundant when companies in the same industry merge. We can't hold Microsoft to a separate standard for post-merger practices, considering the fetish for layoffs is one shared throughout the Fortune 500.

On the other hand, in spite of Microsoft's massive war chest and its appetite for acquiring companies and intellectual property, its cybersecurity practices are in an apparent state of free-fall. AJ Grotto, former White House cyber policy director, claims Microsoft is a "national security threat", due to its monopoly position within the industry, especially within the realm of government IT. In June 2023, Chinese government-backed agents carried out an attack on Microsoft Exchange Online, facilitated by Microsoft's lackadaisical security policies, leading the U.S. Cybersecurity and Infrastructure Security Agency (CISA) to demand "fundamental, security-focused reforms" happen immediately at Microsoft. On April 2, 2024, CISA issued an emergency directive calling for immediate remediation of a major security breach involving Russian state actors exfiltrating data from Microsoft email systems. CISA writes in its directive:

"The Russian state-sponsored cyber actor known as Midnight Blizzard has exfiltrated email correspondence between Federal Civilian Executive Branch (FCEB) agencies and Microsoft through a successful compromise of Microsoft corporate email accounts. The threat actor is using information initially exfiltrated from the corporate email systems, including authentication details shared between Microsoft customers and Microsoft by email, to gain, or attempt to gain, additional access to Microsoft customer systems. According to Microsoft, Midnight Blizzard has increased the volume of some aspects of the intrusion campaign, such as password sprays, by as much as 10-fold in February, compared to an already large volume seen in January 2024."

Microsoft's position on internal cybersecurity practices seemingly hasn't changed, in spite of CEO Satya Nadella's commentary on Microsoft's broken security culture. Nadella said, "If you're faced with the tradeoff between security and another priority, your answer is clear: Do security.
In some cases, this will mean prioritizing security above other things we do, such as releasing new features or providing ongoing support for legacy systems."

Microsoft's various commitments to improved security across its customer-facing products seem more like nebulous promises, while its position on internal security falls somewhere between "scattershot" and "completely undefined". Microsoft's massive size, knowledge siloing, and responsibility for maintaining huge parcels of legacy code likely all contribute to the brokenness of its internal and external cybersecurity practices. An organization the size of Microsoft requires massive investments in cybersecurity, security-awareness training for teams across all units, and external audits to demonstrate security practices are actually being followed. So far, though, Microsoft's only incentive to improve security practices is the threat of losing market share to competitors; in government IT, which accounts for a significant portion of Microsoft's revenue, there is no competition. Meanwhile, the U.S. government has proven itself toothless in handing down reprimands that actually hurt serial security and privacy offenders. Such fines are considered the cost of doing business for companies like Microsoft.

Let's consider, then, the likelihood of the following two scenarios: (1) a serious competitor to Microsoft appears within the next few years and forces Microsoft to change its security and privacy practices, lower prices, and listen to customer feedback; (2) U.S. regulatory agencies hand down multibillion-dollar fines that substantially damage the financials of companies like Microsoft when they fail to comply with industry regulations. Given the direction of our political institutions, including the Supreme Court, the odds of the U.S. government holding abusive monopolies to account seem poor. Likewise, the odds of a serious competitor to Microsoft emerging anytime soon are remote at best.

It seems obvious that we, as tech consumers, are arriving at a crossroads where we have to reconsider our relationships with companies like Microsoft. As more productivity software becomes web-based and the average person's need for processing power and storage declines, Microsoft's Windows hegemony appears precarious. Microsoft is cognizant of the changing dynamics of end-user computing, of course, which is why it is positioning itself as a services company rather than the boxed-software outfit it used to be. The longer-term strategy at Microsoft may well be to convert Windows itself into a monthly or yearly subscription, if its efforts to monetize data mined from current Windows installations don't pay the dividends it wants.

Microsoft's whipsawing of Windows users on the issue of local user account creation in Windows 11 ties into the general enshittification of Microsoft products. Despite some changes in its stance on local accounts versus Microsoft accounts, Microsoft's long-term strategy with Windows 11 has been to discourage users from creating local user accounts when setting up Windows. While Microsoft accounts were previously optional, they are all but mandatory now; this mandate puts users who don't have or don't want a Microsoft account in a compromising position.
More troublingly, Microsoft's decision to make local user accounts optional only in more expensive versions of Windows (or versions unavailable to the general public) raises further questions about the company's abuse of its monopoly position in the consumer computing space. Previously available workarounds to avoid Microsoft's account dictate are slowly being stamped out, leaving users with fewer options for using a local Windows account. In a disturbing twist, this online account mandate means that if a user's computer doesn't ship with compatible networking drivers, the machine can't connect to the Internet during setup and no local account option is available, leaving the user in a sort of Purgatory until compatible drivers can be integrated into a custom Windows image, a task far outside the average user's technical capacity.

As things stand, a confluence of poor practices, anti-consumer policies, and monopoly abuse has put Microsoft in a position where governments and enterprises increasingly question its competency, and end users question the need for Windows at all. Microsoft may not have meaningful competition in government and enterprise IT, but its behavior will hand its competitors all the rope they need to hang it. While Microsoft may envision a future in which it can double-dip by monetizing user data and converting its entire portfolio to monthly subscriptions (see: Adobe), it fails to properly heed the rising threats of Apple's macOS and Google's ChromeOS. In January 2013, Microsoft Windows held 91% of the desktop operating system market; by November 2023, that share had fallen to 72%. During the same period, Linux's market share grew from less than 1% to over 4%, and ChromeOS (which is based on Linux) has become a juggernaut in educational settings.

Microsoft's insistence on ignoring the user experience, milking its government and enterprise clients for all they're worth, and antagonizing the federal government by failing to secure its own infrastructure is leading it, and us, down the garden path to oblivion. As enshittification within the tech space accelerates, we have to reconsider what our data security and privacy are worth. A false sense of convenience has led the average user to change the way they value ownership, security, and privacy, while the stakes in cybersecurity have never been higher. The hyper-normalization of data theft, foreign espionage, and state-backed cyberattacks has led people to expect and accept piss-poor behavior from giant tech companies at a time when these companies should be held to higher standards rather than excused from any real liability. If you do what you've always done, you get what you've always gotten. It's time to abandon bad platforms and reject bad policies, even if it is temporarily inconvenient. Watchdog groups are toothless, and the government certainly won't do it for you.

  • Seen and Unseen: The AI You Know, The AI You Don't

While corporate America carries out its work of finding new ways to put artificial intelligence and large language models in places they shouldn't be, the utility of discriminative AI goes largely unheralded in the public consciousness. Companies like Microsoft and Google are spending billions marketing generative AI. Generative AI deals with the creation of new data that resembles existing data (for instance, creating a painting which is derivative of an existing work). Discriminative AI, meanwhile, deals with the classification and recognition of existing data.

Discriminative models rely on supervised learning, where the datasets fed into the model are labeled and each data point corresponds to a label or category. Discriminative AI models are used in applications such as facial recognition, spam email filtering, image recognition, and sentiment analysis. Generative AI, meanwhile, is employed in the creation of realistic images and videos, new musical compositions, and text generation (for example, writing an essay or email on behalf of a human user). The sketch below shows what "discriminative" means in practice.
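As a minimal illustration (the data is invented, and this is not drawn from the article), here is a tiny spam filter in Python using scikit-learn: a labeled dataset, a word-count featurizer, and a classifier that learns to discriminate between the two categories.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Supervised learning: every data point carries a label.
texts = [
    "win a free prize now", "cheap meds limited offer",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)        # text -> word-count features
model = MultinomialNB().fit(X, labels)     # learn to separate the categories

# The trained model classifies unseen data rather than generating new data.
print(model.predict(vectorizer.transform(["free prize meeting"])))
```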
Chatbots are based on a specific type of AI known as the large language model (LLM). Large language models are trained on massive text datasets and are particularly adept at translation, text generation, text summarization, and conversational question-answering. Google Gemini, for instance, is a collection of LLMs that functions as a conversational chatbot: when you give Gemini a prompt, it replies in a "conversational" way to approximate an interaction with another person. As a result of generative AI's novelty and its wide array of applications in the average user's life, Big Tech has seized on inserting it into everything from web search to online chatbots. While we struggle to fully appreciate the longer-term consequences of hastily deploying generative AI models in every corner of our lives, the momentum of AI only grows.

There doesn't seem to be a consensus on whether AGI (artificial general intelligence) will ever materialize, or, if it does, what risks it poses to humanity; at present, we're still in generative AI's infancy and our regulatory approach is still very much in flux. AGI remains hypothetical; theoretically, an AGI model could reason, synthesize information, solve problems, and adapt to changing external conditions like a human can. The risks inherent in such a technically capable form of AI are hard to overstate. In short, when it comes to AGI: we don't know what we don't know. An AGI that outsmarts its creators could become impossible to control, leading to a potentially devastating sequence of unintended consequences for humanity. An AGI that decides its values don't align with human values, for example, could shut down power grids, launch massive cyberattacks against allied or enemy nations, and be used as a powerful tool in disinformation and social manipulation campaigns. Again, these concerns are all theoretical, but when we consider the rate at which AI as a computer science discipline is evolving, we shouldn't discount the possibility of a future AGI.

The moral-ethical and regulatory concerns surrounding AI mount by the day, and state governments are only now getting to grips with what regulating generative AI in particular will entail. The Connecticut State Senate recently introduced legislation to control bias in AI decision-making and to protect people from manufactured videos and deepfakes. The state is one of the first in the U.S. to introduce legislation targeting AI, and the usual cadre of mostly Republican opponents has seized the opportunity to claim the bill will "stifle innovation" and "harm small business". How, exactly, regulating generative AI will negatively affect the average small business in any meaningful way remains to be seen. Even so, opponents of the bill may have another point: complex legislation on a complex and rapidly-evolving subject is going to bring with it unintended consequences.

We find ourselves in an increasingly dire situation, then, where our legislators, largely geriatric and unplugged from the modern technological zeitgeist, are writing ineffective legislation on matters they don't even peripherally grasp. In fact, most computer science graduates working in their respective fields didn't specialize in artificial intelligence, so we're relying on a vanishingly small percentage of the population to navigate these fiendishly complex issues. AI and ML are rapidly evolving fields and, owing to their popularity, more students majoring in computer science are now focusing on AI, but there is a significant lag between graduating with a specialization and becoming an expert in that specialty. Generative AI in the form of Alexa telling us a joke is the thing we see; the reality of trying to manage a world in which AI has embedded itself is the thing we don't.

Since the Internet's rise to ubiquity during the 1990s, legislation and regulation have lagged behind the explosive growth of technological advancement. We've been fighting an uphill battle to elect representatives who understand technology from a voter base that also largely doesn't understand technology very well. As the rate of change accelerates within the disciplines of AI and machine learning, we need experts in these fields who can respond effectively to these changes. In short, we are running up against the limits of effective governance when those doing the governing aren't digitally literate. Of equal concern is the idea that many of our aging representatives are surrounded by legions of aides and advisors who may well whisper in their ears that AI needs no regulation while those same advisors buy stakes in companies developing their own AI models. The recalcitrance that members of the Republican party have displayed on the subject of effective AI regulation is par for the course, but in this particular case their oppositional defiance is uniquely dangerous to the public.

AI is a Pandora's Box: we don't know how an AI model will hallucinate or how far disinformation generated by AI will spread before a human hits the kill switch. By integrating generative AI into the social fabric, we're essentially entrusting humanity's combined effort and treasure to an entity that has to be constantly managed, reviewed, and course-corrected to behave in a sane and predictable way. This is a more monumental task than most people with only a peripheral understanding of AI seem to realize. The meteoric rise of machine learning within the field of AI seems also to be ushering in a new kind of societal disparity: technological. Those who control the algorithms that make up the body of AI will have a certain degree of power over virtually every aspect of human life; maybe this was the endgame that companies like Meta, Alphabet, and Microsoft had in mind from the outset.
As we discussed in an earlier article about YouTube's methodology for promoting, recommending, and suppressing video content, how effectively can we regulate an industry when most of its doors are sealed to the public? It becomes increasingly clear that Big Tech is expediting its work of separating itself from society; it has spent the last 20 years digging a moat and creating a fiefdom that operates beyond the grip of the law. As companies like IBM, an AI forerunner in its own right, expand their influence by buying or killing the competition, power within Big Tech becomes more consolidated, and key decisions in the realm of AI are made by fewer and fewer people.

Maybe all of this has less to do with AI and more to do with the notion that tech companies have developed a kind of power the world hasn't yet seen: the power to effectively manipulate reality. If we're all living in The Truman Show and we don't even know it, how would we know anything is wrong? Or maybe we're allowed to know there are problems with AI, but only in a superficial sense. When algorithms guide you along a set of tram rails, it must be asked: are these merely suggestions by the algorithm, or are they neatly packaged directives?

On the other hand, discriminative AI works comparatively quietly in the background and to much less public fanfare, processing the massive datasets that enable so many of the services we now take for granted. And there's good and valuable work to be done here: as the Internet grows in size, so, too, does the volume of data companies and individuals have to manage and contextualize. Without discriminative AI models, not many of the digital experiences we enjoy would be possible. Even with AI, the amount of data generated by the rapidly growing number of devices on the Internet raises serious manageability questions for the future. There are nearly endless applications for discriminative AI in science, medicine, biotechnology, meteorology, climatology, and a number of other hard-science disciplines. As with so many evolving technologies, there are important, practical uses for AI in science, research, and engineering, but the potential for abuse on the consumer-facing side is so staggering that effective legislation really can't come soon enough.

AI is a tool like any other. What we have to contend with in Big Tech is not so much limited to AI; we have to contend with a group of self-appointed technocrats who have, time after time, shown total disdain for the public good. The list of companies who openly sell your personal data to third parties (when they aren't losing that data to cyberattacks, that is) is long and ignominious. These are the companies who present users with 157-page Terms of Service agreements which in any other context would call for review by a lawyer. The same companies who can deplatform people or groups they find personally disagreeable. The very companies who can freeze your funds, revoke your domain, shut down your email, or delete your files, all, usually, with no real consequences from our intrepid regulatory authorities. So, the question becomes: do you trust that tech leaders can and will self-police with tools as powerful as these?

  • Beware of these emerging cyberthreats in 2024

The global cost of a data breach last year was $4.45 million, an increase of 15% over three years. As we step into 2024, it's critical to be aware of emerging technology threats that could potentially disrupt and harm your business.

Technology is evolving at a rapid pace, bringing new opportunities and challenges for businesses and individuals alike. Rapid developments in artificial intelligence (AI), machine learning (ML), and quantum computing are leading companies across all industries to radically reconsider their approach to cybersecurity and systems management. While these technologies are poised to make our lives easier, they're also being used to launch sophisticated, large-scale attacks against the networks and devices we depend on. In this article, we'll highlight some emerging technology threats to be aware of in 2024 and beyond.

Data Poisoning Attacks

Data poisoning involves corrupting the datasets used to train AI models. By injecting malicious data, attackers can skew an algorithm's outcomes, which could lead to incorrect decisions in critical sectors like healthcare or finance. Countering this insidious threat requires protecting the integrity of training data and implementing robust validation mechanisms. Businesses should also use AI-generated data cautiously; it should be heavily augmented by human intelligence and data from other sources. The sketch below shows how little poisoned data it takes to skew a model.
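A toy demonstration (synthetic data, invented numbers): train the same simple classifier twice, once on clean labels and once with a slice of training labels maliciously flipped, and compare accuracy on untouched test data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class dataset standing in for, say, fraud/not-fraud records.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poisoning: an attacker flips the labels on 30% of the training rows.
y_bad = y_tr.copy()
n_flipped = int(0.3 * len(y_bad))
y_bad[:n_flipped] = 1 - y_bad[:n_flipped]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print("clean model accuracy:   ", clean.score(X_te, y_te))
print("poisoned model accuracy:", poisoned.score(X_te, y_te))
```

The validation mechanisms the article recommends exist precisely to catch this kind of tampering before training ever runs.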
5G Network Vulnerabilities

The widespread adoption of 5G technology introduces new attack surfaces: with more connected devices, the attack surface broadens. IoT devices, reliant on 5G networks, may become targets for cyberattacks. Securing these devices and implementing strong network protocols is imperative to prevent large-scale attacks. Mobile devices are taking over much of the workload, so ensure your business has a robust mobile device management strategy and properly tracks and manages how these devices access business data.

Quantum Computing Vulnerabilities

Quantum computing, the herald of unprecedented computational power, also poses a threat. Its immense processing capabilities could crack currently secure encryption methods, and hackers might exploit this power to access sensitive data. This emphasizes the need for quantum-resistant encryption techniques to safeguard digital information.

Artificial Intelligence (AI) Manipulation

AI, while transformative, can be (and is) used to facilitate the spread of misinformation. Cybercriminals are already creating convincing deepfakes with AI and automating phishing attacks. Vigilance is essential as AI-driven threats become more sophisticated, demanding robust detection mechanisms to discern genuine from malicious AI-generated content. Regulatory bodies and watchdog groups have proposed mandatory watermarks for AI-generated content to make it easily discernible from human-generated (or human-reviewed) content.

Augmented Reality (AR) and Virtual Reality (VR) Exploits

AR and VR technologies offer immersive experiences, but they also present new vulnerabilities. Cybercriminals might exploit these platforms to deceive users, leading to real-world consequences. Ensuring the security of AR and VR applications is crucial to prevent user manipulation and privacy breaches, particularly in sectors like gaming, education, and healthcare.

Ransomware Evolves

Ransomware attacks have evolved beyond simple data encryption. Threat actors now use double extortion tactics: they steal sensitive data before encrypting files, and if victims refuse to pay, hackers leak or sell this data, causing reputational damage. Some defenses against this evolved ransomware threat include:

• Robust backup solutions
• Regular cybersecurity training
• Proactive threat hunting

Supply Chain Attacks Persist

Supply chain attacks remain a persistent threat. Cybercriminals infiltrate third-party vendors or software providers to compromise larger targets. Strengthening supply chain cybersecurity is critical in preventing cascading cyber incidents. Businesses can do this through rigorous vendor assessments, multi-factor authentication, and continuous monitoring.

Biometric Data Vulnerability

Biometric authentication methods, such as fingerprint or facial recognition, are becoming commonplace. But unlike passwords, biometric data can't be changed once compromised. Protecting biometric data through secure encryption, and ensuring that service providers follow strict privacy regulations, is paramount to preventing identity theft and fraud.

Advanced Phishing Attacks

Phishing attacks are one of the oldest and most common forms of cyberattack, and they are becoming more sophisticated and targeted thanks to AI. For example, hackers customize spear-phishing attacks to a specific individual or organization based on online personal or professional information. Another example is vishing attacks, which use voice calls or voice assistants to impersonate legitimate entities, convincingly persuading victims to take certain actions. Ongoing employee phishing training is vital, as are automated solutions to detect and defend against phishing threats.

At Geeks for Business, we believe that a proactive approach to cybersecurity is critical. With our trusted cybersecurity partner, Huntress, we are able to hunt for threats within networks before they become breaches. With complexity in cyberattacks rising, reacting to an attack just isn't enough; our 24/7 managed endpoint detection and response approach allows us to go on the offense against prospective cybercriminals.

Tips for Defending Against These Threats

As technology evolves, so do the threats that we face. Thus, it's important to be vigilant and proactive. Here are some tips that can help:

• Educate yourself and others about the latest technology threats.
• Use strong passwords and multi-factor authentication for all online accounts.
• Update your software and devices regularly to fix any security vulnerabilities.
• Avoid clicking on suspicious links or attachments in emails or messages.
• Verify the identity and legitimacy of any callers or senders before providing any information or taking any actions.
• Back up your data regularly to prevent data loss in case of a cyberattack.
• Invest in a reliable cyber insurance policy that covers your specific needs and risks.
• Report any suspicious or malicious activity to the relevant authorities.

Need Help Ensuring Your Cybersecurity is Ready for 2024?

Last year's solutions might not be enough to protect against this year's threats. Don't leave your security at risk. We help small and medium businesses throughout Central North Carolina manage their IT, reduce costs and complexity, expose vulnerabilities, and secure critical business assets. Reach out to Geeks for Business today to schedule a chat.

Article used with permission from The Technology Press.

  • 8GB of RAM Just Isn't Enough

When a company with the resources and reach of Apple still sells a base-configuration MacBook with 8GB of RAM and 256GB of disk space in 2023, something has gone wrong. This isn't to say other laptop makers aren't guilty of the same: Lenovo, Dell, and HP, the Big Three names in Windows-based laptops, all offer laptops configured with 8GB of non-upgradable RAM. This practice is demonstrably harmful to prospective customers who aren't technically savvy; in 2023, an instance of Google Chrome with 15 or 20 tabs open, plus the demands of Windows itself and any other apps you might be running at the same time, can easily swallow up 8GB of memory. Even if you're a light user and never have many browser tabs open at once, consider the implications of soldered, non-upgradable RAM: you're stuck with this configuration for the life of the computer. If your needs ever change, or your workload requires more memory, you'll suddenly find yourself in the market for a new laptop.

Apple is perhaps the most flagrant offender of all: a brand new M2-based 13” MacBook Air starts at $1099, and configuring it with 16GB of RAM (which you should consider the bare minimum if you intend to keep the laptop for several years) increases the price to an astonishing $1299. That's a $200 upcharge for another 8GB of RAM; market price for an 8GB DDR5 memory module, meanwhile, is approximately $23. Yes, Apple has to make a profit margin. Yes, Apple's memory architecture is different from that of traditional x86-based laptops. Yes, Apple's memory is "Unified Memory" and is super fast, but so what? Fast memory won't let you run more programs, work with more resource-hungry software, or open more tabs in Chrome or Safari. Worse yet, if you want more storage space on your MacBook Air, it's another $200. Suddenly, you're considering a $1500 laptop, and you've spent $400 to 'upgrade' to a RAM and storage specification that has been standard among Windows laptops for several years, at lower prices. But if you want macOS, there are no alternatives: Apple makes the hardware, Apple makes the software, and Apple sets the prices.

And there is truth in the idea that Apple users are loyal to Apple, that they expect a premium experience and so are willing to pay a premium price. Apple's ecosystem and integration among its own products is second to none. All of these ease-of-use features and other Apple niceties have an intangible value; maybe $1500 for what many would now consider a 'base'-spec laptop isn't a bad deal, after all. This may all be true, but Apple's dogged adherence to offering 8GB of RAM as standard in any laptop is absurd. I'm left to wonder how many college students, perhaps pursuing degrees in computer science or engineering, unpacked their brand new MacBook Airs and were faced with dreaded Memory Pressure warnings from macOS upon loading up all of their applications. Even for office workers who deal mainly in cloud apps, 8GB of RAM can show its limitations quickly. Although Apple's new memory and storage systems are both super fast, constantly writing to disk when out of RAM can shorten that disk's lifespan considerably. These are fundamental concepts of computer science that Apple must surely be aware of. After all, Apple is selling a premium product at a premium price. Why offer an entry-level hardware configuration that reads like the spec sheet of a 2013 Windows PC?
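Curious readers can gauge memory pressure on their own machines. Here is a minimal sketch in Python using the third-party psutil package (an assumption on our part; macOS's Activity Monitor and Windows' Task Manager expose the same numbers, and the 90%/50% thresholds below are rough rules of thumb, not vendor guidance):

```python
import psutil

# Heavy, sustained swap usage means the OS is paging to disk, which is
# slow and adds write wear on a soldered, non-replaceable SSD.
vm = psutil.virtual_memory()
sm = psutil.swap_memory()

print(f"RAM:  {vm.used / 2**30:.1f} / {vm.total / 2**30:.1f} GiB used ({vm.percent}%)")
print(f"Swap: {sm.used / 2**30:.1f} / {sm.total / 2**30:.1f} GiB used ({sm.percent}%)")

if vm.percent > 90 and sm.percent > 50:
    print("Sustained memory pressure: more RAM would likely help.")
```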
There is also the fact that maintaining two different product lines for the 8GB and 16GB system boards that go into new MacBook Airs must be an additional expense for Apple that it could otherwise eliminate. For a company so focused on the user experience rather than on technical specs, Apple sure seems to be going out of its way to ignore how those specs shape the user experience.

To this, Apple might say, "if you need more than 8GB of RAM, you're a pro user and should buy the MacBook Pro, which is configurable with up to 96GB of RAM". Certainly, you can buy a MacBook Pro with 96GB of RAM for a cool $4300. But let's zoom out: just because someone needs 16GB of memory, does that make them a 'pro' user? Apple offers its M2 MacBook Airs with up to 24GB of RAM, a ceiling that seems intended to gently guide users into the loving embrace of pricey MacBook Pro 14s and 16s with pricey 32GB RAM configurations. Needing 16GB of memory doesn't immediately imply that you're a professional video or photo editor, or that you're working in complex financial or scientific software with massive datasets and huge RAM requirements. Maybe you're just a modern user with modern hardware needs. Is Apple this bad at reading the market?

While Apple generally makes fantastic hardware with fabulously sleek integrations with other Apple devices, its ostensible perspective that obsessing over hardware specs is gauche or unseemly does its users a great disservice. In the modern era of JavaScript-heavy websites and memory-hungry applications, 8GB of RAM just doesn't cut it for most people, whether they immediately realize it or not. Even if memory in MacBooks could still be user-upgraded, most users wouldn't attempt the upgrade. Most people aren't extremely technical; in technical circles, like those I travel in, it's easy to imagine that anyone could or would want to take apart their computer to upgrade RAM or install a higher-capacity SSD. In reality, the average person can't be bothered. And who can blame them?

Apple is losing touch with its core audience if it fails to evolve its MacBook configurations with the times. Seemingly gone are the days of Apple's bold hardware adventures, extremely competitive hardware specifications, and focus on power users, the very attitude that earned Apple much of the success it enjoys today. Now, Apple seems like a company coasting mostly on past innovation, with the occasional current flash. Despite Apple's deft concealment of its devices' technical underpinnings from its users, those technical underpinnings still exist and still matter. The average user probably has no idea how much memory or storage capacity they need; they rely on Apple (or Apple's Geniuses) to tell them what's best. In spite of ubiquitous cloud storage options, local storage demands are still growing, as are memory requirements; surely, a company with the technical and engineering prowess of Apple understands the importance of these things.

To put it simply: 8GB of RAM in a premium product with a premium price tag is a contradiction. It's akin to buying a Rolls-Royce Phantom with a 4-cylinder engine and a 5-gallon gas tank. Even though Apple customers and Rolls-Royce customers probably don't spend much time gawking at spec sheets, the specs still matter. Sadly, across the board, upgradable memory in laptops is a phenomenon on the decline.
Laptop makers' claims that people just want thinner and thinner machines, at the expense of all else (a claim those companies are loath to prove), have led to the adoption of soldered RAM in an effort to slim down devices. In reality, such claims are likely nothing more than marketing ephemera: a gambit to upsell people into higher-specced laptops at premium prices. In our opinion, this is more than just a sales tactic used by a stagnating industry to increase revenues; it's another salvo in the war on Right to Repair and the very notion of actually owning what you buy.

  • The day the world went away

COVID exposed the frailty of just-in-time supply chains, and things aren't getting better.

Since 2020, procurement in the tech sector has steadily become more difficult. Corporate purchasers and consumer buyers alike continue to grapple with both finished-product and component shortages. Promises that the broken supply chains fomented by the pandemic would be remedied by now have gone unfulfilled. Why hasn't that product been released yet? Why are application-specific processors still so hard to find? Why are critical components needed for networking, industrial computing, IoT, and edge computing missing in action? The supply chain crisis was never really fixed, and mainstream publications have backed away from covering the issue.

During the first quarter of 2020, as lockdowns and quarantines began, consumer technology devices like laptops and webcams saw a massive surge in demand. This sudden surge shocked our "just-in-time" supply chain model, under which inventory is moved only as demand forecasts dictate. This logistical model saves retailers, warehouses, and manufacturers on storage costs, but it invites supply-side bottlenecks when a supply or demand shock hits the system. Throughout 2021 and 2022, supply chain "disruptions" continued to ripple through the economy, underscoring just how fragile our just-in-time paradigm is. Data science has made just-in-time possible: advanced demand forecasting, automated order management, and sophisticated consumer behavior analysis have not only enabled just-in-time logistics but made it the de facto standard for the industrialized world.

All the while, critical component shortages persisted, in spite of companies' best PR efforts to hand-wave the mounting problems away. The Raspberry Pi Trading Company, famous for the wildly popular Raspberry Pi single-board computer, is still struggling to increase its manufacturing output to meet demand for its hardware. The issue: a shortage in one component needed to manufacture a Raspberry Pi brings production to a halt. WiFi chips, Bluetooth radios, USB-C ports, voltage regulators, rectifiers, and Ethernet PHYs are all necessary to make a Raspberry Pi 4, for example. If WiFi chips are in short supply, the boards can't be finished. Car manufacturers encountered a similar dilemma, with Ford unable to send finished vehicles to dealers due to the widely-publicized "chip" shortage. Untold thousands of vehicles sat idle while carmakers waited for the supply chain to unsnarl itself. Other cars were shipped to dealers missing highly-touted "smart" features like Apple CarPlay; if Ford, GM, and others had suspended sales of every vehicle that didn't have every chip it needed, they would have had to contend with massive financial losses. As demand for pickup trucks and luxury vehicles increased, car buyers were faced with purchasing an unfinished product, all thanks to the just-in-time supply chain.

The snarled supply chain likewise drove lumber prices through the roof in 2021, a price shock from which the market still hasn't fully recovered. Coupled with very hot inflation rates, sticker shock across a broad swath of industries became normalized. Just-in-time was still broken, even as the pandemic receded and government-imposed economic restrictions were lifted. By early 2022, the true scope of our supply chain wreckage was coming into view. With the headwinds of inflation, personal debt, bankruptcy, and business failures during COVID, the back of the supply chain had been broken.
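A toy model (all numbers invented) makes the fragility concrete: when replenishment follows the forecast rather than reality, even a few days of surging demand empty the pipeline.

```python
# Just-in-time, caricatured: stock on hand covers only ~2 days of
# forecast demand, and resupply arrives at the forecast rate.
forecast_daily_demand = 100
inventory = forecast_daily_demand * 2   # a two-day buffer

actual_demand = [100, 100, 250, 250, 250]  # days 3-5: a lockdown-style surge
for day, demand in enumerate(actual_demand, start=1):
    inventory += forecast_daily_demand     # resupply tracks the forecast, not reality
    sold = min(demand, inventory)
    inventory -= sold
    print(f"day {day}: sold {sold}, unmet demand {demand - sold}, stock left {inventory}")
```

By day four the buffer is gone and unmet demand piles up, which is roughly the story of laptops and webcams in early 2020.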
Shortages were going to improve, we were assured, as things got back to normal. "As things get back to normal": a soothing refrain for a world shaken by a very real global supply chain failure. Now, in the latter half of 2023, just when things will return to normal remains unknown. As those who work in corporate IT procurement will attest, things have not gotten better. Hardware launches have been postponed while prices have risen dramatically, both in end-user products such as laptops and in components. Once-commonplace corporate IT devices, like managed Ethernet switches, have seen persistent shortages, while the manufacturers of those devices have had to scramble to substitute new chips for the old chips they can no longer easily source. These chip substitutions constitute a problem all their own: because component-level hardware like NAND flash controllers has been scarce, solid state disks have been especially vulnerable to unannounced hardware "updates", which usually lead to reduced performance and reliability. As unscrupulous manufacturers seize this opportunity to flood the market with low-quality knock-off chips and other hardware, companies around the world are being forced to accept a new reality in which the old supply chain is gone and isn't coming back.

Where we used to be able to promise a reliable product at a certain price, we now have to accept substitute goods that may not meet the performance, feature, or reliability characteristics of the original product. Worse yet, the replacement product may not meet the client's needs, either. The sudden shock to our tightly-wound supply chain has given rise to a cascading failure of epic proportions. We are in uncharted territory, with no historical precedent or context to inform our next move.

The biggest tech companies, like Apple and Amazon, have weathered the storm relatively unscathed, owing to their massive liquidity. For the smaller (but still big) players in tech, the news isn't so rosy. For the smallest players, such as managed service providers, small software developers, game studios, and system integrators, the state of affairs is bleak. A company with Amazon's scale can simply tell a Chinese factory, "I'll pay you $100 million to manufacture these chips, to my specifications," and the factory will take their money, throw out every other bid, and make the chips. The biggest players get first pick of the world's limited high-tech manufacturing capacity. Everyone else gets to fight over the scraps.

In chipmaking, bringing a new factory online can cost billions of dollars, take years to complete, and typically requires huge amounts of raw materials, including fresh water. Intel can speak to the difficulty and expense of semiconductor fabrication, as its recent missteps have shown. Of course, this assumes a complex chip design (such as the x86-based processors most of us rely on in our own computers); a simpler chip design, such as an ARM or RISC-V part, is less costly to manufacture and can be reworked to efficiently suit a wider range of applications more easily than x86. Even so, manufacturing enough ARM SoCs (systems-on-chip) for its single-board computers has also proven difficult for The Raspberry Pi Trading Company. There is no safe haven in the world of just-in-time logistics.

It is possible, or perhaps even likely, that the deterioration of our gossamer supply chain will lead to a resurgence in local and more resilient manufacturing.
If we have any hope of sustaining ourselves, we have to move the point of manufacture closer to its audience. Shipping a laptop battery from China to Maine, for example, is just demonstrably stupid. Growing populations and growing demand for technology will only shine a more unkind light on our current way of moving inventory around the world. Maybe just-in-time is peak capitalism, and we needed to see it destroyed before our very eyes to appreciate just what's so wrong with it. In a world that is both tech-obsessed and contending with serious environmental and resource-related challenges, reducing both the complexity and the impact of manufacturing is a clear net positive.

  • Algorithm-directed content has made reality a hall of mirrors

How much content is organic? YouTube content creators sometimes break their own fourth wall by admitting (often for comic relief) that the algorithm made them do it. YouTube's algorithm is poorly understood, perhaps even by the Google engineers who develop it; the average YouTube viewer has no clue how the algorithm works, or why they see recommendations for videos that seemingly have nothing to do with their watch histories. To the outside observer, the algorithm is mostly a black box. Content creators, however, have at least some insight into what the algorithm is doing: by analyzing the performance metrics of their past uploads, they can start to infer which kinds of content resonate with the algorithm. One might say, "the algorithm isn't to blame: viewers decide what they like and don't like", but this misses so much of what the algorithm itself is doing behind the scenes.

In a 2018 interview, software engineer Guillaume Chaslot spoke about his time at YouTube, telling The Guardian, "YouTube is something that looks like reality, but it is distorted to make you spend more time online. The recommendation algorithm is not optimizing for what is truthful, or balanced, or healthy for democracy." Chaslot explained that YouTube's algorithm is always in flux, with different inputs given different weights over time. "Watch time was the priority. Everything else was considered a distraction", he told The Guardian.
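Nobody outside Google knows the real ranking function, but a toy sketch (entirely hypothetical metrics and weights) shows how "different inputs given different weights" plays out in practice: nudge the weights and a different video wins.

```python
# Hypothetical linear ranking, echoing "watch time was the priority":
# whichever upload scores highest gets promoted.
def score(video: dict, weights: dict) -> float:
    return sum(w * video.get(metric, 0.0) for metric, w in weights.items())

weights = {"watch_minutes": 1.0, "likes": 0.01, "comments": 0.02}

candidates = [
    {"id": "master_carpenter_interview", "watch_minutes": 1200, "likes": 900, "comments": 300},
    {"id": "fix_a_leaky_faucet",         "watch_minutes": 5000, "likes": 150, "comments": 40},
]

for v in sorted(candidates, key=lambda v: score(v, weights), reverse=True):
    print(f'{v["id"]}: {score(v, weights):.0f}')
# With these weights the faucet video wins by a mile; weight likes and
# comments heavily enough and the interview would come out on top.
```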
Consequently, the content creator returns to his normal fare, and watch hours, views, likes, and “engagement” go back up. On a broad scale, this behavior has a chilling effect on speech itself. Algorithms and machine-learning models play an outsize role in what kinds of content people make and what kinds of things we see. Is important work being taken out back and shot because the algorithm has concluded, based on historical data, that it won’t perform well? Or is the algorithm, even by chance, ensuring that videos it deems overly critical of a brand or company, or just generally problematic, won’t be seen again?

Artificial intelligence can’t be conscientious; it lacks self-awareness. All the same, human provocateurs can and do bake their desires and agendas into these algorithms, giving AI the power to selectively dismiss and bury content it has been instructed not to like. It’s important not to conflate this weaponization of AI with simple clickbait. Clickbait, problematic in its own right, acts as an incentive to publish certain kinds of content; algorithmic suppression disincentivizes creators from making certain kinds of content at all. And make no mistake: algorithmic activism is a weaponization of artificial intelligence. As we fall further down the trash-compactor chute of tech dystopia, we need to remember that the same companies who now wield AI as a cudgel against certain content were the same companies who heralded AI as a net positive for humanity. Google’s former motto was “don’t be evil”; they’ve since removed that utterance from their corporate code of conduct, as if Google needed to make it any more obvious. Regardless, there is no “if” AI becomes a weapon, because it already is one. The question is how bad we, and our elected representatives, will allow things to get before we place hard limits on AI’s scope in the public domain.

Print media hasn’t escaped the problem, either. Print, if anything, is a vestigial organ; it follows whatever is in fashion in digital media, as it may not surprise you to learn. So when you read your local newspaper and ask, “why is every story about a shooting or a new tech company promising to hire a whole bunch of people?”, remember the algorithm: the same algorithm that tells YouTube creators to be very careful about publishing unpopular material. The algorithm itself defines popularity, so you’ll never really know whether the videos it deep-sixes are resonating with people; there are no guardrails, no checks and balances, and you can’t interrogate the black box. When we factor in the use of AI to write articles, product reviews, and other content, we find we really do exist in an algorithmic hall of mirrors. How can we discern which videos or articles to trust if the entire game is rigged?

The quest tech companies have embarked on with AI is fairly straightforward: do whatever is necessary to beat the opponent’s AI model into submission. The AI that emerges victorious then gets to decide reality for the rest of us. Consider deepfakes: what if, in the future, an anti-establishment YouTube creator is framed for murder using an AI-generated confession that the man himself never uttered? This isn’t a cheesy Blade Runner-style screenplay pitch; it’s already happening. Deepfakes have already necessitated using AI to sniff out phony and doctored content.
The Massachusetts Institute of Technology has developed a deepfake-detection experiment, the research behind which has been peer-reviewed and published in the Proceedings of the National Academy of Sciences of the United States of America. The new space race between large language models will only intensify, and our reality will be dragged along with it. Truepic has also developed a system to authenticate videos and images, aiming to guarantee the authenticity of digital master copies and thereby prevent the fraud and abuse that deepfakes engender.

Deepfakes also promise to make corporate espionage easier, and cyberattacks harder to prevent, thanks to the sophisticated phishing attacks the technology facilitates. Because of AI-generated deepfakes, requirements for companies seeking cybersecurity insurance will no doubt become more stringent. Cybersecurity insurers already require out-of-band authentication (OOBA) as a defense against deepfake- and impersonation-based attacks on clients (see the sketch at the end of this post); these authentication strategies are only one piece of the puzzle in mitigating emerging deepfake threats, however. Additional software tools, authentication factors, user training, and the use of advanced AI technologies will become necessary components of protecting employees and clients from malevolent AI attacks.

The aim of this post isn’t so much to make you doubt your experiences as to encourage you to ask some pointed questions about the things you read and watch online. The algorithms that run social media will only grow more sophisticated, and so, too, must our responses. If we don’t treat artificial intelligence with due caution and subject it to much-needed scientific and legislative rigor, we will find ourselves in a frightening new reality that we won’t be able to escape.
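To make the OOBA idea concrete, here is a minimal sketch in Python. The flow (a phone call to a number already on file) and the helper names are our own illustrative assumptions, not any insurer’s specification; a real system would deliver the code over a genuinely separate channel.

    # Minimal sketch of out-of-band authentication (OOBA): a high-risk request
    # received on one channel (e.g., email) is confirmed over a second channel
    # (e.g., a phone call to a number already on file) before it is honored.
    # Helper names are illustrative; the "second channel" is simulated here.
    import hmac
    import secrets

    def start_verification() -> str:
        """Generate a short one-time code to be read out over the second channel."""
        return f"{secrets.randbelow(10**6):06d}"

    def confirm(expected_code: str, entered_code: str) -> bool:
        """Constant-time comparison of the code the requester reads back."""
        return hmac.compare_digest(expected_code, entered_code)

    # A wire-change request arrives by email. Don't act on it; instead, call
    # the number on file, read out a fresh code, and have the requester repeat it.
    code = start_verification()
    print(f"Operator reads this code over the phone: {code}")
    assert confirm(code, code)              # the legitimate requester can echo it
    assert not confirm(code, "not-a-code")  # a deepfaked email sender cannot

The point of the pattern is that a convincing voice or email forgery on one channel is useless without control of the second channel, too.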

  • Outsmarting your smart devices

    The Internet of Things, a growing constellation of so-called “smart” devices such as doorbells, Internet-connected cameras, and smoke detectors, has long been criticized for its almost total absence of security. Smart device makers like TP-Link, Amazon, Google, and Wyze are no strangers to controversy when it comes to the sorry state of IoT security, yet the U.S. government has done little to enact legislation protecting the buyers of these devices. In 2025, the IoT device market is projected to bring in nearly $31 billion in revenue for smart device manufacturers. As the market grows, so do opportunities for malicious actors to exploit weaknesses in IoT devices’ firmware, software, and the cloud infrastructure that lets users conveniently manage those devices from their PCs and smartphones.

The statistics are grim: between January and June 2021, Kaspersky estimated that approximately 1.5 billion attacks were carried out against Internet-connected smart devices. Kaspersky further found that approximately 872 million of those attacks, or 58%, were carried out with the intent to mine cryptocurrency on the compromised devices. While each IoT device has fairly minuscule processing power, a network of a billion or more such devices mining cryptocurrency or spreading malware is a formidable threat.

While compromised smart home devices can expose sensitive data and cost consumers money, a much more critical threat exists in the medical IoT sector. In early 2022, Cynerio found that 53% of medical devices have a known critical vulnerability. These vulnerabilities often go unpatched by the medical device manufacturers and pose a serious threat to patient health and safety. Questions also arise about an insurer’s willingness to cover a hospital or medical facility that deploys devices containing these vulnerabilities, unless mitigating actions are taken.

Cynerio’s January 2022 report on the state of the medical Internet of Things also found that a majority of devices used in medicine (pharmacology, oncology, laboratory) run versions of Windows older than Windows 10. This includes medical devices running versions of Windows as old as Windows XP, released in 2001. Microsoft ended all support for Windows XP in April 2014, but a significant number of expensive medical devices, such as X-ray, MRI, and CAT scan machines, still rely on computers running the now 22-year-old operating system. Research by Palo Alto Networks in March 2020 found that 83% of these devices rely on unsupported operating systems, such as Windows XP and Windows 7.

Hospitals are usually reluctant to upgrade, even if unsupported software puts their patients at risk or jeopardizes their HIPAA compliance, because upgrading operating systems may mean upgrading expensive hardware. Given skyrocketing costs for the patient and the provider, hospitals find themselves in the unenviable position of having to choose between painting an ever-larger security target on their backs or spending millions of dollars on hardware and software upgrades. Unfortunately for hospitals and other medical facilities, refusing to upgrade can mean serious fines imposed by the federal government. Just last month, the Department of Health and Human Services reached a $75,000 settlement with Kentucky-based iHealth Solutions, a provider of software and services for the medical sector, for violating HIPAA security and privacy laws.
HHS determined that iHealth Solutions did not disclose or remediate weaknesses within its own network, leading to a data breach and the release of patient records in 2017. For the healthcare industry in particular, network security and compliance have become especially thorny issues, given the requirements that federal and state laws set forth for the transmission and storage of patient data. The specter of compromised medical devices only adds to the pressure on hospitals to employ best security and networking practices, lock down devices, and deploy software and hardware from known vendors with a track record of supporting their products.

For non-medical industries, the stakes may be lower, but data security should still be top-of-mind for business owners. The Internet of Things is rapidly evolving and represents substantial added value to businesses that want to harness data to make better decisions; the devices we allow on our networks, however, must be managed and monitored to ensure they don’t become a liability (see the inventory sketch at the end of this post). In our opinion, IoT security isn’t going to get easier as the segment grows. More devices on a network means more potential for exploits and theft of sensitive information. When we talk about data breaches in 2023, we don’t ask “if”, but “when”. This probably isn’t the optimistic take that business owners and IT managers want to hear, but threats are only becoming more complex. Data security is hard and requires an ongoing effort from your IT provider, as well as an ongoing commitment from you, the business owner; anything less can expose you to identity theft, fraud, regulatory fines, or even the loss of your business.

Geek Housecalls and Geeks for Business offer free security consultations for home and business. Get in touch today for your free consultation and a detailed IT plan, tailored to your unique needs.
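As a small illustration of what “managing and monitoring” devices can look like in practice, here is a minimal sketch that flags unfamiliar devices by comparing the machine’s ARP cache against a hand-maintained allowlist. It assumes a host where the standard arp -a command is available; the MAC addresses below are made up.

    # Minimal sketch: flag unknown devices on the local network by comparing
    # ARP-cache MAC addresses against a hand-maintained allowlist.
    # Assumes `arp -a` is available (Windows, macOS, Linux); MACs are made up.
    import re
    import subprocess

    KNOWN_DEVICES = {
        "aa:bb:cc:dd:ee:01",  # hypothetical: office printer
        "aa:bb:cc:dd:ee:02",  # hypothetical: lobby camera
    }

    def seen_macs() -> set[str]:
        """Return normalized MAC addresses currently in the ARP cache."""
        output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
        # Match both aa:bb:cc:dd:ee:ff and aa-bb-cc-dd-ee-ff forms.
        macs = re.findall(r"(?:[0-9a-fA-F]{2}[:-]){5}[0-9a-fA-F]{2}", output)
        return {m.lower().replace("-", ":") for m in macs}

    if __name__ == "__main__":
        for mac in sorted(seen_macs() - KNOWN_DEVICES):
            print(f"Unrecognized device on the network: {mac}")

This is only a starting point: a real deployment would inventory devices continuously and alert on changes, rather than relying on a one-off scan.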

  • Managed storage as a defense against ransomware

    It’s getting harder to trust cloud storage providers — In the wake of revelations that Western Digital’s internal IT resources were compromised by a malicious third-party hacker group, serious questions about the security and reliability of cloud storage services have again been raised. Nor is this the first time a company entrusted with massive troves of personal data has been compromised; in 2015, health insurance provider Anthem experienced a major data breach which exposed the personally identifiable information (PII) of around 80 million of its customers. In 2020, Anthem struck a deal with a group of state attorneys general to pay a $39.5 million settlement relating to the hack. Anthem denied any wrongdoing.

We’ve discussed data breaches on this blog before, but the frequency and severity with which they now occur have encouraged us to impress upon our clients the importance of managed storage services. Whether or not you pay for end-user cloud storage services like iCloud or Google Drive, your sensitive information is already in the cloud. Healthcare information, browser search histories, call logs, telemetry from smart devices and medical devices, financial information, and more are all stored somewhere. Is that data encrypted? Are the companies who initially gathered your data even still in business? If they were acquired by another company, did the parent company inform you what it planned to do with your data? Has your data been sold to third parties without your knowledge or consent?

These massive data troves, often poorly secured and subject to nebulous or nonexistent regulation, are valuable both to “legitimate” outfits like advertisers and to nefarious entities, like black-hat hacking groups. Some of this data can’t easily be managed; when you consent to use a product or service, you’re entrusting your information to a corporate entity, or entities, who probably don’t have your best interests in mind. Even Microsoft sells your data. To a degree, you’re at the mercy of these tech giants’ End User License Agreements; these days, it’s all but impossible to go “off the grid” as far as data gathering is concerned. However, while the wheels of legislation may turn slowly, you can limit the amount of data you release to third parties by controlling the data that is personally significant to you (family photos, videos, music, documents, and so on). This is where managed storage comes in.

In spite of enticing promotional offers and rock-bottom pricing, it bears repeating that the cloud is just someone else’s computer. By using any cloud storage service, no matter the money and corporate pedigree behind it, you’re assuming that the provider has done its due diligence in securing your data. Surely we can trust the likes of Google, Apple, and Microsoft to understand fundamental concepts of data security, right? Well, can we?

It’s important to understand that services like iCloud and Google Drive are inexpensive for a reason. This isn’t to say that you’re getting a less secure product if you pay less money, but rather that the services themselves are constantly in flux. Google might offer you 50GB of cloud storage for $1 per month, with a feature you like or use frequently (like integration with your Windows file system). From one month to the next, Google might decide that this feature is too costly or complex to continue supporting. A while later, that feature is gone. You’re still paying the same (or more) for the service, but the product has changed.
Are you still getting what you paid for? To reiterate: you are at the mercy of the companies who manage these cloud storage platforms. This would be less of a concern if no one ever stored sensitive data on these services, but people do just that, constantly. People also back up this sensitive data to a single service and keep it nowhere else, placing total faith in the reliability and privacy of the cloud storage service they’ve chosen. This is a dangerous gamble, to say the least. No matter the scale or the cost, all cloud providers eventually suffer a failure. The failure may be catastrophic. You might lose everything. You probably won’t; historically, the largest cloud storage providers have been pretty good about keeping user data intact. But is this a bet you want to make, day in and day out, with irreplaceable data? And beyond availability, do you trust your cloud provider to secure that data? Lost data is usually less devastating than stolen data, depending on what kind of data is being stored.

To that end, Geek Housecalls/Geeks for Business is introducing our managed data service. While we have worked closely with the preeminent cloud storage provider Backblaze to help clients streamline their data backups, we recognize that Backblaze is, for better or worse, another cloud storage service. We mean no disparagement toward Backblaze (they’re great at what they do), but cloud storage just isn’t a bulletproof solution for storing and securing the rapidly growing amounts of data the average person generates. As such, we’d like to shine a light on the importance of onsite data storage: our managed data product pairs an onsite hardware appliance (known as a Network Attached Storage server) with automation routines written by us, ensuring that your data is backed up on your schedule, securely.

Envision a central storage device that pulls in data from each client device on your network (laptops, desktops, phones, smart devices such as security cameras). Now envision a device that does this without user intervention or configuration. This is our value-add: we design and implement both the hardware and the software, utilizing industry best practices and hardware from Synology. For business clients with more involved data storage requirements, we also work with TrueNAS Enterprise for custom storage server solutions.

Network Attached Storage (NAS) has been around for decades and serves an important purpose when we approach the thorny subjects of data security and data availability. This has held especially true for business use cases, but a logical data storage solution is increasingly critical in home environments as well. NAS devices, such as those from Synology, are also highly extensible; your storage server can be extended with plugins for popular services like Apple AirPlay and Plex Media, and it can host local instances of password management software like Bitwarden. Modern hardware has become fast and efficient enough to unlock a lot of possibilities here, all without paying a cloud storage provider a monthly subscription fee. Most importantly, you retain control of your data, which is something no cloud provider can honestly claim.

And finally, we don’t want to make it seem like the big cloud storage providers are bad or can’t be trusted, just that they shouldn’t be your first or only choice for data management. If your storage needs exceed 50 or 100GB, costs can add up quickly.
And it rarely makes sense to pay a princely sum to host “bulk data” (movies, TV shows, games, music) in the cloud in the first place. Bulk data storage is much more economical when done on premises (your home or business), and it gives you faster and more secure access to that data as well. We regard solutions like Microsoft OneDrive, Google Drive, and Apple iCloud as good backups for your backups. That is, you should have multiple copies of important files, and some of those copies can live in the cloud. We do strongly recommend encrypting important files before they leave your network, however, just in case your cloud provider is compromised (a minimal sketch appears at the end of this post).

In the realm of data storage, the adage is “two is one and one is none” with respect to how many copies of your data you really need. Our recommendation: keep an onsite backup, an offsite backup that only connects to the Internet to synchronize with the onsite copy, and at least one cloud backup (iCloud, Google Drive, Backblaze, OneDrive, etc.). Additionally, cloud services like Google Drive can easily be integrated with onsite backup solutions. If you’d like all of your tax documents, for instance, to be backed up to the cloud as soon as they’re backed up to your local NAS, that’s easily accomplished thanks to the extensibility of modern storage servers.

One substantial benefit of secure, managed backups is that, in the event of a ransomware or other malware attack, you can roll your systems back to known-good configurations, bypassing expensive, complex malware remediation. Ransomware can’t thrive in an environment where secure data backups exist. This is something Microsoft claims to offer with its OneDrive product, but that claim is optimistic at best. Data snapshots, versioning, and offline backups go far beyond the scope of what OneDrive or any other consumer-grade cloud backup solution can hope to offer.

Data management is only going to become more complex, and we think it’s time everyone took control of their data destiny. Give Geek Housecalls and Geeks for Business a call (or email) today to set up a consultation for your storage and security needs.
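For readers who want to encrypt files before they sync to a cloud provider, here is a minimal sketch using Python’s third-party cryptography library (installed with pip install cryptography). The file paths are placeholders, and a real deployment needs careful key management rather than a key file sitting next to the data.

    # Minimal sketch: encrypt a file locally before it is synced to cloud storage.
    # Assumes `pip install cryptography`; paths and filenames are placeholders.
    from pathlib import Path

    from cryptography.fernet import Fernet

    KEY_FILE = Path("backup.key")       # in practice, store this OFF the synced volume
    SOURCE = Path("tax-documents.pdf")  # placeholder file to protect
    DEST = Path("tax-documents.pdf.enc")

    # Create a key once and reuse it; losing the key means losing the data.
    if not KEY_FILE.exists():
        KEY_FILE.write_bytes(Fernet.generate_key())

    fernet = Fernet(KEY_FILE.read_bytes())
    DEST.write_bytes(fernet.encrypt(SOURCE.read_bytes()))
    print(f"Encrypted {SOURCE} -> {DEST}; upload only the .enc file.")

Recovering the original is the mirror image: fernet.decrypt() on the .enc file’s bytes. The key never leaves your premises, so a breach at the cloud provider exposes only ciphertext.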

  • Your laptop’s advertised battery life is a lie

    — Windows laptop manufacturers have dramatically overstated battery runtimes for years

With the sole exception of Apple, every single laptop manufacturer has lied, consistently, about their laptops’ battery life. As a study from Digital Trends found in 2017, Apple actually understated the battery runtime of its MacBook Air 13”: the company claimed a 10-hour battery life, while Digital Trends’ testing found the MacBook ran for 12 hours. In this instance, Apple is the only winner. Every manufacturer of Windows laptops was found to have overstated (if we’re being generous) or lied about (if we’re being honest) battery life. Dell was found to have overstated its battery life figures to the tune of 4 hours, while HP overstated runtimes by almost 5 hours. When probed for comment on these vast discrepancies, Dell told Digital Trends: “It’s difficult to give a specific battery life expectation that will directly correlate to all customer usage behaviors because every individual uses their PC differently.”

Dell’s statement is a cop-out, at best; every Windows laptop manufacturer will no doubt provide a variation of the very same statement on the subject of battery life. We can say, authoritatively, that their response to Digital Trends is a complete misdirection. What Dell fails to acknowledge is that battery life tests are generally synthetic and not based on real-world workloads (such as web browsing, video editing, or software development). Let’s dig into what exactly these synthetic battery life tests are and why they are nearly worthless for providing useful insight into a device’s real battery life.

UL Solutions (Underwriters Laboratories), publisher of various computer benchmarking applications, offers what it describes as a standardized battery life test within its PCMark 10 benchmark suite. The battery life portion of PCMark 10 tests laptops under three synthetic workloads: video, modern office, and gaming. Each workload test also generates a power usage report indicating the wattage a given laptop draws under each synthetic workload. While this standard theoretically allows independent reviewers and media outlets to test a laptop’s battery life and produce results directly comparable to other laptops tested with PCMark 10, the reality is murkier. Owing to runtime losses from battery degradation, changes in runtimes due to firmware and operating system updates, and possible battery life fluctuations between operating system versions (e.g., Windows 10 versus Windows 11), maintaining a database of reliable battery life numbers becomes a daunting task. This is made worse by the sheer, mind-boggling number of Windows laptops available on the market at any given time.

To further complicate matters, some manufacturers sell the same laptop with smaller and larger battery capacity options. Lenovo, for example, offers two variants of its ThinkPad X13 Gen 3: one with a 41Wh (watt-hour) battery and one with a 54.7Wh battery. Naturally, the variant with the 54.7Wh battery will run longer, thanks to its larger capacity (see the quick arithmetic below). The problem is that Lenovo’s own website makes it difficult to know which version you’re buying; it’s usually assumed that preconfigured versions of Lenovo laptops with higher-end specifications will also include the larger battery, if one is offered. Retailers likewise often fail to publish battery capacity information if none is made available by the manufacturer.
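As a rough sanity check on any battery claim, runtime is just capacity divided by average power draw. A minimal sketch, with the wattage figures as illustrative assumptions rather than measurements:

    # Minimal sketch: estimate battery runtime from capacity and average power draw.
    # The draw figures below are illustrative assumptions, not measured values.
    def runtime_hours(capacity_wh: float, avg_draw_watts: float) -> float:
        """Ideal runtime in hours; real-world results will be lower."""
        return capacity_wh / avg_draw_watts

    for capacity in (41.0, 54.7):            # the two ThinkPad X13 Gen 3 options
        for draw in (6.0, 12.0):             # light web browsing vs. a heavier load
            print(f"{capacity}Wh at {draw}W = {runtime_hours(capacity, draw):.1f} h")

At an assumed 6W of light use, the 54.7Wh battery yields roughly 9 hours to the 41Wh battery’s 6.8, which is why knowing which battery you’re actually buying matters so much.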
Other online computer review publications, like the seminal Notebookcheck.net, have created their own battery life tests, which are not standardized and can’t be directly compared to runtime numbers produced by other reviewers. In Notebookcheck’s case, however, the methodology has remained consistent, so its results are comparable across its own reviews, even for laptops reviewed years ago. Elsewhere on the Internet, one benchmark may specify a 150-nit brightness setting with WiFi disabled for its battery test, while another may specify 200 nits with WiFi enabled. The manner in which battery runtimes are tested is not at all consistent across manufacturers and review outlets. This maelstrom of non-standardized data allows the laptop makers themselves to claim that there is no truth and that everyone’s experience with battery life will be different.

We don’t encounter laptop makers understating the battery life of their laptops; why is that? Better to under-promise and over-deliver, isn’t it? Of course, if Dell, HP, Asus, or Lenovo admitted the average person might only see 4 hours of battery life, they’d sell fewer laptops. Apple, especially now that it produces its own silicon, is in the enviable position of offering the best battery life in the industry. Windows laptop manufacturers have taken note of this and are willing to beg, borrow, and steal their way to comparable energy efficiency in their own machines. This, realistically, is impossible: Intel and AMD cannot compete with the efficiency of Apple’s in-house silicon (though AMD remains ahead of Intel in this regard).

Much of the battery life delta between Windows (x86) and Apple (ARM) machines comes down to architectural differences between x86 and ARM. x86 has existed since 1978 and, while it has grown in complexity since, has never been fundamentally overhauled to address its inherent inefficiencies. The x86 architecture carries legacy instruction sets that just aren’t relevant to modern computing workloads; this legacy cruft still takes up space on the processor’s die and requires power, whether it’s used or not. This agglomeration of inefficiencies has made it harder and harder for Intel and AMD to produce the kinds of efficiency gains Apple achieved by jettisoning x86 altogether. This is why Apple can understate its battery life and no one else can.

As more people find that they can do most, if not all, of their work using web apps and cross-platform apps, Windows becomes less of a necessity, and Windows-based laptops with poor battery life along with it. Kids in schools across the country have found that they can do their assignments just as easily, if not more easily, on Chromebooks running Google’s ChromeOS as on Windows laptops. Software developers and creative professionals in photography and graphic design have long chosen Apple laptops for their reliability and more focused software offerings. Maybe it’s time for users of other stripes to reevaluate whether Windows and x86 are all that compelling anymore. MacBooks are expensive, though perhaps less so than you imagined: the price of entry hovers around $800 for an entry-level MacBook Air with 8GB of RAM and 256GB of storage, a perfectly adequate specification for casual users and office workers.
While $800 is appreciably more than the low-end, $300 Windows machines you might find at Walmart, the differences in battery life, performance, build quality, and overall polish justify spending more. A MacBook will also yield a much longer usable lifespan than a budget Windows laptop. Unless a breakthrough happens in x86’s design efficiency, custom ARM silicon (like Apple’s M1 and M2 chips) seems to point the way forward for mobile computing. Regardless of your feelings about Apple’s closed ecosystem and its spendy hardware, Apple does deliver on its battery life promises. Let the PC makers squabble over 5- and 6-hour real-world battery life; you’ve got other options.

If you’re in need of a new device or a new fleet of devices for your business, get in touch with Geek Housecalls today and we’ll recommend the best device for your use case, at competitive pricing.

  • Companies are turning their backs on the cloud

    Rising costs and complexity are fostering a cloud repatriation — As of early 2023, a new trend is emerging in corporate IT: companies are increasingly turning their backs on cloud-hosted infrastructure and returning, at least partially, to on-premises infrastructure. High cloud hosting bills and the complexity of managing cloud spending are placing an untenable load on smaller companies that don’t have the manpower or resources to babysit their cloud balance sheets. In a survey by FoundryCo, 40% of respondents cited the need to keep cloud spending in check as a roadblock to further use of cloud services. When a company’s IT department is juggling services from multiple cloud providers, this task becomes a growing headache. FinOps systems are available to help companies manage their cloud spending, but buy-in for such a system necessitates more spending, perhaps more hiring, and additional complexity within the organization. For larger companies, the burden of integrating a new systems management paradigm is easier to defray; for smaller organizations, however, implementing a FinOps system for cloud spending management could overwhelm existing budgets.

So what are SMBs (small and medium businesses) to do? Cloud repatriation. In an age of rising cloud computing prices, pulling cloud resources back to on-premises infrastructure can represent substantial cost savings. In a 2022 survey by Anodot, 50% of IT executives reported that “it’s difficult to get cloud costs under control”. Further, Anodot’s survey found that: “over a third of participants (37%) have been surprised by their cloud costs; more than half (53%) say the key challenge is gaining true visibility into cloud usage and costs, while 50% said complex cloud pricing and 49% said complex, multi-cloud environments; more than one quarter (28%) of respondents said it takes weeks or months to notice a spike in cloud costs, a figure that has not improved over 2021.”

When we consider that the majority of the cloud compute and storage market is dominated by three major players, Amazon Web Services, Microsoft, and Google, it becomes less surprising that pricing is not as transparent as it could or should be. A 2022 study from HashiCorp found that 94% of companies are wasting money on cloud technologies. In its fourth annual multicloud survey, Virtana, a multicloud management platform vendor, found that among its 350 respondents, 69% said cloud storage accounted for more than a quarter of their total cloud costs, and 23% said cloud storage spending accounted for more than half. Overall, Virtana found that 94% of the IT leaders it surveyed reported rising cloud storage costs, and 54% reported that spending on cloud storage was growing faster than their overall cloud costs.

In addition to concerns about spending and waste, concerns about information security and performance have also entered the equation for many organizations. With the parade of high-profile data breaches, the largest of which have all occurred since 2010, CIOs and other IT executives face increasing operational headwinds in securing their companies’ resources. The burden on smaller organizations is disproportionately larger, owing to smaller budgets and slower reaction times to incidents like ransomware attacks and proprietary data leaks. IBM reported in its 2022 Transformation Index that 54% of respondents felt the public cloud is not secure enough.
The challenges facing organizations that use the public cloud are only mounting as datasets grow larger and the complexity of managing disparate data silos increases in kind. In 2020, data integration vendor Matillion surveyed 200 IT professionals on the issue of data integration and found that 90% of respondents cited a lack of data insights as a barrier to growth. Rapid data growth, and consequently the monumental tasks of managing, archiving, and securing that data, has led to what can only be described as a data crisis at some organizations.

Further complicating matters at the physical storage level is the arms race between data storage demands and currently available storage technology. With advanced storage methodologies like HAMR (heat-assisted magnetic recording) soon to hit the market, it may seem like we’re keeping up with the demands of global data centers. The issue, however, is complexity. As hard drive densities increase, along with the complexity of the physical mechanisms needed to achieve those densities, management becomes more difficult. Higher capacity hard drives require tighter tolerances, more advanced engineering techniques, and more complex storage controllers, and they are more prone to unrecoverable errors during storage array rebuilds.

Smaller organizations can glean some useful lessons from all of this when weighing the relative merits of “lifting and shifting” their IT infrastructure to the cloud versus keeping data and compute on premises. For smaller companies, the calculus often tilts in favor of the public cloud in low-demand scenarios, such as identity management, hosted office applications, and VoIP. Even in lower-demand scenarios, however, inflation in cloud pricing has led to 50% of organizations exceeding their budgets for cloud storage spending (a rough break-even sketch appears at the end of this post). In February 2023, Google announced price increases for Google Workspace Flexible Plans and Google Workspace Enterprise Standard, but also announced new flexible pricing options for organizations looking to migrate their data to Google’s cloud platform.

While cloud compute pricing seems to be trending downward, cloud storage pricing is moving in the opposite direction: demand for cloud data storage is skyrocketing, forecast to grow around 25% year-over-year through 2028. Higher costs are a function of both this explosive demand in the storage sector and the ongoing supply chain snarls that have driven up semiconductor prices worldwide. When this trend will reverse is unclear; high year-over-year price increases in cloud storage are likely here to stay for the foreseeable future.

If your business is struggling to get cloud compute and storage costs under control, Geeks for Business can help. Let us analyze your current cloud spending, your business’s compute and storage needs, and your forecasted growth; we can help streamline complex multicloud environments while reducing ongoing costs. Every organization’s needs are unique: on-premises infrastructure isn’t (yet) a cure-all for high operating costs. Get in touch with Geeks for Business today and let’s get to work on getting your IT costs under control.
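To make the repatriation math concrete, here is a rough break-even sketch comparing a monthly cloud storage bill against an amortized on-premises NAS. Every figure below is an illustrative assumption, not a quote; real pricing (egress fees, support contracts, electricity, staff time) will move the answer in either direction.

    # Rough break-even sketch: monthly cloud storage bill vs. an amortized
    # on-premises NAS. All figures are illustrative assumptions, not quotes.
    TB_STORED = 20                     # terabytes the business keeps hot
    CLOUD_PER_TB_MONTH = 20.0          # assumed blended $/TB/month in the cloud
    NAS_HARDWARE = 4000.0              # assumed NAS + drives, purchased outright
    NAS_MONTHLY_UPKEEP = 60.0          # assumed power, parts, and admin time
    LIFESPAN_MONTHS = 60               # amortize hardware over five years

    cloud_monthly = TB_STORED * CLOUD_PER_TB_MONTH
    onprem_monthly = NAS_HARDWARE / LIFESPAN_MONTHS + NAS_MONTHLY_UPKEEP

    print(f"Cloud:   ${cloud_monthly:,.2f}/month")
    print(f"On-prem: ${onprem_monthly:,.2f}/month")
    if onprem_monthly < cloud_monthly:
        months = NAS_HARDWARE / (cloud_monthly - NAS_MONTHLY_UPKEEP)
        print(f"Hardware pays for itself in about {months:.1f} months")

Under these assumptions, 20TB in the cloud runs $400 a month against roughly $127 on-premises, and the hardware pays for itself in under a year; the exercise is worth repeating with your own numbers before any migration, in either direction.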
