Connecting Dots


  • Beware of these emerging cyberthreats in 2024

    The global average cost of a data breach last year was $4.45 million, an increase of 15% over three years. As we step into 2024, it's critical to be aware of emerging technology threats that could disrupt and harm your business. Technology is evolving at a rapid pace, bringing new opportunities and challenges for businesses and individuals alike. Rapid developments in artificial intelligence (AI), machine learning (ML), and quantum computing are leading companies across all industries to radically reconsider their approach to cybersecurity and systems management. While these technologies are poised to make our lives easier, they're also being used to launch sophisticated, large-scale attacks against the networks and devices we depend on. In this article, we'll highlight some emerging technology threats to be aware of in 2024 and beyond.

Data Poisoning Attacks

Data poisoning involves corrupting the datasets used to train AI models. By injecting malicious data, attackers can skew an algorithm's outcomes, which could lead to incorrect decisions in critical sectors like healthcare or finance. Countering this insidious threat requires protecting the integrity of training data and implementing robust validation mechanisms. Businesses should also use AI-generated data cautiously; it should be heavily augmented by human intelligence and data from other sources.

5G Network Vulnerabilities

The widespread adoption of 5G technology introduces new attack surfaces. With more connected devices, the attack vector broadens, and IoT devices reliant on 5G networks may become targets for cyberattacks. Securing these devices and implementing strong network protocols is imperative, especially to prevent large-scale attacks. With mobile taking over much of the workload, ensure your business has a robust mobile device management strategy so that how these devices access business data is properly tracked and managed.
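The data-poisoning mechanism described earlier in this article can be made concrete with a toy sketch. The "model" below is a deliberately simple nearest-centroid classifier on invented one-dimensional data; the point is only to show how a handful of mislabeled records injected by an attacker shifts the decision boundary:

```python
# Toy illustration of data poisoning against a nearest-centroid classifier.
# All data and labels here are synthetic and hypothetical.

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (value, label) pairs; returns the decision threshold.
    c0 = centroid([v for v, y in data if y == 0])
    c1 = centroid([v for v, y in data if y == 1])
    return (c0 + c1) / 2  # classify as class 1 if value > threshold

clean = [(0, 0), (1, 0), (2, 0), (9, 1), (10, 1), (11, 1)]
print(train(clean))           # 5.5 -> a reading of 6 is flagged as class 1

# Attacker injects a few records with high values mislabeled as class 0
poison = [(10, 0), (10, 0), (10, 0)]
print(train(clean + poison))  # 7.75 -> the same reading of 6 now slips through
```

Real poisoning attacks target far larger models, but the principle is identical: the attacker never touches the model, only the training data, which is why validating data provenance matters so much.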
Quantum Computing Vulnerabilities

Quantum computing, the herald of unprecedented computational power, also poses a threat. Its immense processing capabilities could crack currently secure encryption methods, and hackers might exploit this power to access sensitive data. This emphasizes the need for quantum-resistant encryption techniques to safeguard digital information.

Artificial Intelligence (AI) Manipulation

AI, while transformative, can be (and is being) used to spread misinformation. Cybercriminals are already creating convincing deepfakes with AI and automating phishing attacks. Vigilance is essential as AI-driven threats grow more sophisticated, demanding robust detection mechanisms to discern genuine from malicious AI-generated content. Regulatory bodies and watchdog groups have proposed mandatory watermarks for AI-generated content to make it easily discernible from human-generated (or human-reviewed) content.

Augmented Reality (AR) and Virtual Reality (VR) Exploits

AR and VR technologies offer immersive experiences, but they also present new vulnerabilities. Cybercriminals might exploit these platforms to deceive users, leading to real-world consequences. Ensuring the security of AR and VR applications is crucial to prevent user manipulation and privacy breaches, particularly in sectors like gaming, education, and healthcare.

Ransomware Evolves

Ransomware attacks have evolved beyond simple data encryption. Threat actors now use double extortion tactics: they steal sensitive data before encrypting files, and if victims refuse to pay, hackers leak or sell that data, causing reputational damage. Defenses against this evolved ransomware threat include:

  • Robust backup solutions
  • Regular cybersecurity training
  • Proactive threat hunting

Supply Chain Attacks Persist

Supply chain attacks remain a persistent threat. Cybercriminals infiltrate third-party vendors or software providers to compromise larger targets.
Strengthening supply chain cybersecurity is critical to preventing cascading cyber incidents. Businesses can do this through rigorous vendor assessments, multi-factor authentication, and continuous monitoring.

Biometric Data Vulnerability

Biometric authentication methods, such as fingerprint or facial recognition, are becoming commonplace. But unlike passwords, biometric data can't be changed once compromised. Protecting biometric data through secure encryption, and ensuring that service providers follow strict privacy regulations, is paramount to preventing identity theft and fraud.

Advanced Phishing Attacks

Phishing is one of the oldest and most common forms of cyberattack, and these attacks are becoming more sophisticated and targeted thanks to AI. For example, hackers customize spear phishing attacks to a specific individual or organization based on online personal or professional information. Another example is vishing attacks, which use voice calls or voice assistants to impersonate legitimate entities and convincingly persuade victims to take certain actions. Ongoing employee phishing training is vital, as are automated solutions to detect and defend against phishing threats.

At Geeks for Business, we believe that a proactive approach to cybersecurity is critical. With our trusted cybersecurity partner, Huntress, we are able to hunt for threats within networks before they become breaches. With the complexity of cyberattacks rising, reacting to an attack just isn't enough; our 24/7 managed endpoint detection and response approach allows us to go on the offense against prospective cybercriminals.

Tips for Defending Against These Threats

As technology evolves, so do the threats we face. It's important to be vigilant and proactive. Here are some tips that can help:

  • Educate yourself and others about the latest technology threats.
  • Use strong passwords and multi-factor authentication for all online accounts.
  • Update your software and devices regularly to fix any security vulnerabilities.
  • Avoid clicking on suspicious links or attachments in emails or messages.
  • Verify the identity and legitimacy of any callers or senders before providing information or taking action.
  • Back up your data regularly to prevent data loss in case of a cyberattack.
  • Invest in a reliable cyber insurance policy that covers your specific needs and risks.
  • Report any suspicious or malicious activity to the relevant authorities.

Need Help Ensuring Your Cybersecurity is Ready for 2024?

Last year's solutions might not be enough to protect against this year's threats. Don't leave your security at risk. We help small and medium businesses throughout Central North Carolina manage their IT, reduce costs and complexity, expose vulnerabilities, and secure critical business assets. Reach out to Geeks for Business today to schedule a chat.

Article used with permission from The Technology Press.

  • 8GB of RAM Just Isn't Enough

    When a company with the resources and reach of Apple still sells a base configuration MacBook with 8GB of RAM and 256GB of disk space in 2023, something has gone wrong. This isn't to say other laptop makers aren't guilty of the same: Lenovo, Dell, and HP, the Big Three names in Windows-based laptops, all offer laptop computers configured with 8GB of non-upgradable RAM.

This practice is demonstrably harmful to prospective customers who aren't technically savvy. In 2023, an instance of Google Chrome with 15 or 20 tabs open, plus the demands of Windows itself and any other apps you might be running, can easily swallow up 8GB of memory. Even if you're a light user and never have many browser tabs open at once, consider the implications of soldered, non-upgradable RAM: you're stuck with this configuration for the life of the computer. If your needs ever change, or your workload requires more memory, you'll suddenly find yourself in the market for a new laptop.

Apple is perhaps the most flagrant offender of all. A brand new M2-based 13" MacBook Air starts at $1099, and configuring it with 16GB of RAM (which you should consider the bare minimum if you intend to keep the laptop for several years) raises the price to an astonishing $1299. That's a $200 upcharge for another 8GB of RAM, while the market price for an 8GB DDR5 memory module is approximately $23. Yes, Apple has to make a profit margin. Yes, Apple's memory architecture is different from that of traditional x86-based laptops. Yes, Apple's "Unified Memory" is very fast, but so what? Fast memory won't let you run more programs, work with more resource-hungry software, or open more tabs in Chrome or Safari. Worse yet, if you want more storage space on your MacBook Air, it's another $200.
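Back-of-envelope math on the figures above puts the markup in perspective. Both numbers come from the article; the street price is approximate and fluctuates:

```python
# Rough markup math using the figures cited above (prices approximate).
apple_upcharge = 200    # $ to go from 8GB to 16GB on a MacBook Air
market_price_8gb = 23   # $ street price of an 8GB DDR5 module

markup = apple_upcharge / market_price_8gb
print(round(markup, 1))  # roughly 8.7x the commodity price
```

Unified Memory isn't a commodity DDR5 module, so this isn't an exact apples-to-apples comparison, but the gap is wide enough that the point survives any reasonable adjustment.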
Suddenly, you're considering a $1500 laptop, having spent $400 to 'upgrade' to a RAM and storage specification that has been standard among Windows laptops, at lower prices, for several years. But if you want macOS, there are no alternatives: Apple makes the hardware, Apple makes the software, and Apple sets the prices. And there is truth in the idea that Apple users are loyal to Apple, that they expect a premium experience and so are willing to pay a premium price. Apple's ecosystem and integration among its own products is second to none. All of these ease-of-use features and other Apple niceties have an intangible value; maybe $1500 for what many would now consider a 'base' spec laptop isn't a bad deal, after all.

This may all be true, but Apple's dogged adherence to 8GB of RAM as the standard configuration in any laptop is absurd. I'm left to wonder how many college students, perhaps pursuing their degrees in computer science or engineering, unpacked their brand new MacBook Airs and were greeted by macOS's dreaded Memory Pressure warnings upon loading up all of their applications. Even for office workers who deal mainly in cloud apps, 8GB of RAM can show its limitations quickly. Although Apple's new memory and storage systems are both very fast, constantly swapping to disk when out of RAM can shorten that disk's lifespan considerably. These are fundamental concepts of computer science that Apple must surely be aware of.

After all, Apple is selling a premium product at a premium price. Why offer an entry-level hardware configuration that reads like the spec sheet of a 2013 Windows PC? Not to mention that maintaining two different product lines for the 8GB and 16GB system boards that go into new MacBook Airs must be an additional expense that Apple could otherwise eliminate.
For a company so focused on the user experience, rather than getting hung up on technical specs, Apple sure seems to be going out of its way to ignore those specs altogether. To this, Apple might say, "if you need more than 8GB of RAM, you're a pro user and should buy the MacBook Pro, which is configurable with up to 96GB of RAM". Certainly, you can buy a MacBook Pro with 96GB of RAM for a cool $4300. But let's zoom out: just because someone needs 16GB of memory, does that make them a 'pro' user? Apple offers its M2 MacBook Airs with up to 24GB of RAM, a ceiling that seems intended to gently guide users into the loving embrace of pricey MacBook Pro 14s and 16s with their pricey 32GB RAM configurations. Needing 16GB of memory doesn't immediately imply that you're a professional video or photo editor, or that you're working in complex financial or scientific software with massive datasets and huge RAM requirements. Maybe you're just a modern user with modern hardware needs. Is Apple this bad at reading the market?

While Apple generally makes fantastic hardware with fabulously sleek integrations with other Apple devices, its ostensible perspective that obsessing over hardware specs is gauche or unseemly does its users a great disservice. In the modern era of JavaScript-heavy websites and memory-hungry applications, 8GB of RAM just doesn't cut it for most people, whether they immediately realize it or not.

Even if memory in MacBooks could still be user-upgraded, most users wouldn't attempt the upgrade. Most people aren't extremely technical; in technical circles, like those I travel in, it's easy to imagine that anyone could, or would want to, take apart their computer to upgrade RAM or install a higher-capacity SSD. In reality, the average person can't be bothered. And who can blame them? Apple is losing touch with its core audience if it fails to evolve its MacBook configurations with the times.
Seemingly gone are the days of Apple's bold hardware adventures, extremely competitive hardware specifications, and focus on power users. That attitude earned Apple much of the success it enjoys today. Now, Apple seems like a company coasting mostly on past, and only occasionally current, innovation.

Despite Apple's deft concealment of its devices' technical underpinnings from its users, those underpinnings still exist and still matter. The average user probably has no idea how much memory or storage capacity they need; they rely on Apple (or Apple's Geniuses) to tell them what's best. In spite of ubiquitous cloud storage options, local storage demands are still growing, as are memory requirements; surely, a company with the technical and engineering prowess of Apple understands the importance of these things.

To put it simply: 8GB of RAM in a premium product with a premium price tag is a contradiction. It's akin to buying a Rolls-Royce Phantom with a 4-cylinder engine and a 5-gallon gas tank. Even though Apple customers and Rolls-Royce customers probably don't spend much time gawking at spec sheets, the specs still matter.

Sadly, across the board, upgradable memory in laptops is on the decline. Laptop makers' claims that people just want thinner and thinner machines, at the expense of all else (a claim those companies are loath to prove), have led to the adoption of soldered RAM in an effort to slim down devices. In reality, such claims are likely nothing more than marketing ephemera: a gambit to upsell people into higher-specced laptops at premium prices. In our opinion, this is more than just a sales tactic used by a stagnating industry to increase revenues; it's another salvo in the war on Right to Repair and the very notion of actually owning what you buy.

  • The day the world went away

    – COVID exposed the frailty of just-in-time supply chains, and things aren't getting better

Since 2020, procurement in the tech sector has steadily become more difficult. Corporate purchasers and consumer buyers alike continue to grapple with shortages of both finished products and components. Promises that the broken supply chains brought on by the pandemic would be remedied by now have gone unfulfilled. Why hasn't that product been released yet? Why are application-specific processors still so hard to find? Why are critical components needed for networking, industrial computing, IoT, and edge computing missing in action? The supply chain crisis was never really fixed, and mainstream publications have backed away from covering the issue.

During the first quarter of 2020, as lockdowns and quarantines began, consumer technology devices like laptops and webcams saw a massive surge in demand. This sudden surge shocked our "just-in-time" supply chain model, in which inventory is only moved as demand forecasts dictate. This logistical model saves retailers, warehouses, and manufacturers on storage costs, but it creates supply-side bottlenecks when a supply or demand shock hits the system. Throughout 2021 and 2022, supply chain "disruptions" continued to ripple through the economy, underscoring just how fragile our just-in-time paradigm is. Data science has made just-in-time possible: advanced demand forecasting, automated order management, and sophisticated consumer behavior analysis have not only enabled just-in-time logistics but made it the de facto standard for the industrialized world.

All the while, critical component shortages persisted, in spite of companies' best PR efforts to hand-wave the mounting problems away. The Raspberry Pi Trading Company, famous for the wildly popular Raspberry Pi single-board computer, is still struggling to increase its manufacturing output in order to meet demand for its hardware.
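The just-in-time dynamic described above can be sketched in a few lines. This is a toy model with invented numbers, not a real logistics system: each period, orders are placed to match a trailing-average demand forecast, so stock on hand is always thin, and a sudden demand spike produces shortfalls that persist until the forecast catches up:

```python
# Toy just-in-time model: order only what a trailing forecast predicts.
# All demand figures are invented for illustration.

def simulate(demand, lead_window=3):
    stock, unmet = 0, []
    history = [demand[0]] * lead_window      # seed the forecast
    for d in demand:
        stock += sum(history) / len(history)  # order = trailing-average forecast
        sold = min(stock, d)
        unmet.append(d - sold)                # demand we couldn't serve
        stock -= sold
        history = history[1:] + [d]           # forecast slowly absorbs the shock
    return unmet

steady = [100] * 6
shock = [100, 100, 100, 300, 300, 300]       # demand triples overnight
print(simulate(steady))   # no shortfalls under steady demand
print(simulate(shock))    # shortfalls appear at the spike and linger
```

Under steady demand the model never misses a sale; after the spike it leaves 200 units of demand unserved immediately, then 133 and 67 in the following periods as the forecast adjusts. Holding safety stock would absorb the shock, which is exactly the storage cost just-in-time was designed to eliminate.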
The issue: a shortage of just one component needed to manufacture a Raspberry Pi brings production to a halt. WiFi chips, Bluetooth radios, USB-C ports, voltage regulators, rectifiers, and Ethernet PHYs are all necessary to make a Raspberry Pi 4, for example. If WiFi chips are in short supply, the boards can't be finished.

Car manufacturers encountered a similar dilemma, with Ford unable to send finished vehicles to dealers due to the widely publicized "chip" shortage. Untold thousands of vehicles sat idle while carmakers waited for the supply chain to unsnarl itself. Other cars were shipped to dealers missing highly touted "smart" features like Apple CarPlay; if Ford, GM, and others had suspended sales of every vehicle that didn't have every chip it needed, they would have had to contend with massive financial losses. As demand for pickup trucks and luxury vehicles increased, car buyers were faced with purchasing an unfinished product, all thanks to the just-in-time supply chain.

The snarled supply chain likewise drove lumber prices through the roof in 2021, a price shock from which the market still hasn't fully recovered. Coupled with very hot inflation rates, sticker shock across a broad swath of industries became normalized. Just-in-time was still broken, even as the pandemic receded and government-imposed economic restrictions were lifted.

By early 2022, the true scope of our supply chain wreckage was coming into view. With the headwinds of inflation, personal debt, bankruptcy, and business failures during COVID, the back of the supply chain had been broken. Shortages were going to improve, we were assured, as things got back to normal. "As things get back to normal": a soothing refrain for a world shaken by a very real global supply chain failure. Now, in the latter half of 2023, just when things will return to normal remains unknown. As those who work in corporate IT procurement will attest, things have not gotten better.
Hardware launches have been postponed while prices have risen dramatically, both for end-user products such as laptops and for components. Once-commonplace corporate IT devices, like managed Ethernet switches, have seen persistent shortages, and the manufacturers of those devices have had to scramble to substitute new chips for the old chips they can't easily source anymore. These chip substitutions constitute a problem all their own: owing to shortages of component-level hardware, like NAND flash controllers, solid state disks have been especially vulnerable to unannounced hardware "updates", which usually lead to reduced performance and reliability.

As unscrupulous manufacturers seize this opportunity to flood the market with low-quality knockoff chips and other hardware, companies around the world are being forced to accept a new reality in which the old supply chain is gone and isn't coming back. Where we used to be able to promise a reliable product at a certain price, we now have to accept substitute goods that may not meet the performance, feature, or reliability characteristics of the original product. Worse yet, the replacement product may not meet the client's needs, either.

The sudden shock to our tightly wound supply chain has given rise to a cascading failure of epic proportions. We are in uncharted territory, with no historical precedent or context to inform our next move. The biggest tech companies, like Apple and Amazon, have weathered the storm relatively unscathed, owing to their massive liquidity. For the smaller (but still big) players in tech, the news isn't so rosy. For the smallest players, such as managed service providers, small software developers, game studios, and system integrators, the state of affairs is bleak.
A company with Amazon's scale can simply tell a Chinese factory, "I'll pay you $100 million to manufacture these chips, to my specifications", and the factory will take its money, throw out every other bid, and make the chips. The biggest players get first pick of the world's limited high-tech manufacturing capacity. Everyone else gets to fight over the scraps.

In chipmaking, bringing a new factory online can cost billions of dollars, take years to complete, and typically requires huge amounts of raw materials, including fresh water. Intel can speak to the difficulty and expense of semiconductor fabrication, as its recent missteps have shown. Of course, this assumes a complex chip design (such as the x86-based processors most of us rely on in our own computers); a simpler, RISC-based chip design, such as ARM, is less costly to manufacture and can be reworked to efficiently suit a wider range of applications more easily than x86. Even so, manufacturing enough ARM SoCs (systems-on-chip) for its single-board computers has also proven difficult for The Raspberry Pi Trading Company. There is no safe haven in the world of just-in-time logistics.

It is possible, perhaps even likely, that the deterioration of our gossamer supply chain will lead to a resurgence in local, more resilient manufacturing. If we have any hope of sustaining ourselves, we have to move the point of manufacture closer to its audience; shipping a laptop battery from China to Maine, for example, is demonstrably wasteful. Growing populations and growing demand for technology will only shine a more unkind light on our current way of moving inventory around the world. Maybe just-in-time is peak capitalism, and we needed to see it destroyed before our very eyes to appreciate just what's so wrong with it.
In a world that is both tech-obsessed and contending with serious environmental and resource-related challenges, reducing both the complexity and the impact of manufacturing is a clear net positive.

  • Algorithm-directed content has made reality a hall of mirrors

    How much content is organic? YouTube content creators sometimes break the fourth wall by admitting (often for comic relief) that the algorithm made them do it. YouTube's algorithm is poorly understood, perhaps even by the Google engineers who develop it; the average YouTube viewer has no clue how the algorithm works, or why they see recommendations for videos that seemingly have nothing to do with their watch histories. To the outside observer, the algorithm is mostly a black box. Content creators, however, have at least some insight into what the algorithm is doing: by analyzing the performance metrics of their past uploads, they can start to infer which kinds of content resonate with the algorithm. One might say, "the algorithm isn't to blame; viewers decide what they like and don't like", but this misses much of what the algorithm itself is doing behind the scenes.

In a 2018 interview, software engineer Guillaume Chaslot spoke about his time at YouTube, telling The Guardian, "YouTube is something that looks like reality, but it is distorted to make you spend more time online. The recommendation algorithm is not optimizing for what is truthful, or balanced, or healthy for democracy." Chaslot explained that YouTube's algorithm is always in flux, with different inputs given different weights over time. "Watch time was the priority. Everything else was considered a distraction", he told The Guardian.

YouTube's algorithm behaves in ways that appear outwardly bizarre: recommending videos from channels with 4 subscribers, videos with zero views, or videos from spam and affiliate-link clickfarm channels. To an uninitiated third party, the algorithm seems a bit obtuse, unpredictable, mercurial; perhaps it is working as intended. The algorithm, not the audience, increasingly directs the kinds of content that uploaders make.
This is assuming, of course, that the uploader wants to monetize their content and collect ad revenue from high-performing videos. A channel you subscribe to that normally uploads home repair videos may decide to upload a new kind of video, in which the creator travels across the country to interview a master carpenter about his trade. This video, while very well received among the channel's audience, earns a fraction of the usual total watch hours and much less "engagement". This results in lower ad revenue for the creator, but it also results in the video being shown to fewer people outside the channel's existing audience.

The content was good, but the reception was poor. Was the reception poor because people decided the video was a bit too long? Not relevant enough to home repair? Was Mercury perhaps in retrograde? No one really knows, not even the channel owner. The algorithm has decided, based on its unknowable metrics, that this is a bad video, and it won't be going out of its way to promote it on the front page of YouTube. As a result, the creator, despite his love of the video he made and its subject matter, will mothball his plans for more videos in his "Interviews with a Master Carpenter" series, because the money just isn't there. Conceivably, the first video performed poorly simply by virtue of its newness, not due to any intrinsic flaw in the content. Maybe subsequent videos in the series would have done better. It doesn't matter, though, because the algorithm has all but whispered to the channel owner, "don't make this kind of video again". Consequently, the creator returns to his normal fare, and watch hours, views, likes, and "engagement" go back up.

On a broad scale, this behavior would seem to have a chilling effect on speech itself. Algorithms and machine learning models are playing an outsize role in what kinds of content people make and what kinds of things we see.
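The watch-time bias at work here can be caricatured in a few lines. The videos and numbers below are invented, and real recommenders weigh vastly more signals, but the core point survives the simplification: ranking purely by watch time and ranking by how much viewers actually liked a video can produce opposite winners:

```python
# Toy ranking sketch: the same three hypothetical videos, ordered two ways.
videos = [
    # (title, avg_minutes_watched, viewer_rating_out_of_5) -- invented numbers
    ("Quick drywall fix",           12.0, 4.1),
    ("Master carpenter interview",   6.5, 4.9),
    ("10-hour ambient loop",        95.0, 3.2),
]

by_watch_time = sorted(videos, key=lambda v: v[1], reverse=True)
by_rating     = sorted(videos, key=lambda v: v[2], reverse=True)

print(by_watch_time[0][0])  # the ambient loop "wins" on watch time
print(by_rating[0][0])      # the interview wins on viewer approval
```

An optimizer tuned to the first ordering will bury the interview, exactly as described above, even though it is the best-loved video of the three.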
Is important work being taken out back and shot because the algorithm has concluded, based on historical data, that it won't perform well? Or is the algorithm, even by chance, ensuring that videos it deems overly critical of a brand or company, or just generally problematic, won't be seen again? Artificial intelligence doesn't have the ability to be conscientious; it lacks self-awareness. All the same, human provocateurs can and do encode their desires and agendas into these algorithms, giving AI the power to selectively dismiss and bury content it has been instructed not to like.

It's important not to conflate this weaponization of AI with simple clickbait. While clickbait is problematic in its own right, it acts as an incentive for publishing certain kinds of content, rather than a disincentive against making certain kinds of content. And make no mistake: algorithmic activism is weaponization of artificial intelligence. As we fall further down the trash-compactor chute of tech dystopia, we need to remember that the same companies who wield AI as a cudgel against certain content are the same companies who heralded AI as a net positive for humanity. Google's former motto was "don't be evil"; the company has since removed that utterance from its corporate code of conduct, as if Google needed to make it any more obvious that it is evil. Regardless, there is no "if" AI becomes a weapon, because it already is one. The question is how bad we, and our elected representatives, will allow things to get before we place hard limits on AI's scope in the public domain.

Print media hasn't escaped the problem, either. Print, if anything, is a vestigial organ: it follows whatever is in fashion in digital media, as it may not surprise you to learn. So, when you read your local newspaper and ask, "why is every story about a shooting or a new tech company promising to hire a whole bunch of people?", remember the algorithm.
The same algorithm that tells YouTube content creators to be very careful about publishing unpopular material. The algorithm itself defines popularity, so you'll never really know whether the videos it deep-sixes are resonating with people or not; there are no guardrails, no systems of checks and balances, and you can't interrogate the black box. When we factor in the use of AI to write articles, product reviews, and other content, we find we really do exist in an algorithmic hall of mirrors. How can we discern which videos or articles to trust if the entire game is rigged? The quest tech companies have embarked on with AI is fairly straightforward: do whatever is necessary to beat the opponent's AI model into submission. Then, the AI that emerges victorious gets to decide reality for the rest of us.

Consider deepfakes: what if, in the future, an anti-establishment YouTube creator is framed for murder using an AI-generated confession that the man himself never uttered? This isn't a cheesy Blade Runner-style screenplay pitch; the underlying capability already exists. Deepfakes have already necessitated using AI to sniff out phony and doctored content. The Massachusetts Institute of Technology is developing a deepfake detection experiment whose findings have been peer-reviewed in the Proceedings of the National Academy of Sciences of the United States of America. The new space race between large language models will only intensify, and our reality will be dragged along with it. Truepic has also developed a system to authenticate videos and images, aiming to guarantee the authenticity of digital master copies and thereby prevent the fraud and abuse that deepfakes engender.

Deepfakes also promise to make corporate espionage easier, and cyberattacks harder to prevent, thanks to the sophisticated phishing attacks the technology facilitates. Because of AI-generated deepfakes, requirements for companies to obtain cybersecurity insurance will no doubt become more stringent.
Cybersecurity insurers already require out-of-band authentication (OOBA) as a defense against deepfake and impersonation-based attacks on clients; these authentication strategies are only one piece of the puzzle in mitigating emerging deepfake threats, however. Additional software tools, authentication factors, user training, and advanced AI technologies will become necessary components of protecting employees and clients from malevolent AI attacks.

The aim of this post isn't so much to make you doubt your experiences as to encourage you to ask some pointed questions about the things you read and watch online. The algorithms that run social media will only grow more sophisticated, and so, too, must our responses. If we don't treat artificial intelligence with due caution and subject it to much-needed scientific and legislative rigor, we will find ourselves in a frightening new reality that we won't be able to escape.

  • Outsmarting your smart devices

    The Internet of Things, a growing constellation of so-called "smart" devices such as doorbells, Internet-connected cameras, and smoke detectors, has long been criticized for its almost total absence of security. Smart device makers like TP-Link, Amazon, Google, and Wyze are no strangers to controversy when it comes to the sorry state of IoT security, yet the U.S. government has done little to enact legislation protecting the buyers of these devices. In 2025, the IoT device market is projected to bring in nearly $31 billion in revenue for smart device manufacturers. As the market grows, so do opportunities for malicious actors to exploit weaknesses in IoT devices' firmware, software, and the cloud infrastructure that lets users conveniently manage those devices from their PCs and smartphones.

The statistics are grim: between January and June 2021, Kaspersky estimated that approximately 1.5 billion attacks were carried out against Internet-connected smart devices. Kaspersky further found that approximately 872 million of those, or 58% of the total, were carried out with the intent to mine cryptocurrency on the compromised devices. While each IoT device has fairly minuscule processing power, a network of a billion or more such devices mining cryptocurrency or spreading malware is a formidable threat.

While compromised smart home devices can expose sensitive data and cost consumers money, a much more critical threat exists in the medical IoT sector. In early 2022, Cynerio found that 53% of medical devices have a known critical vulnerability. These vulnerabilities often go unpatched by the medical device manufacturers and pose a serious threat to patient health and safety. Questions are also raised about an insurer's willingness to cover a hospital or medical facility that deploys devices containing these vulnerabilities, unless mitigating actions are taken.
Cynerio’s January 2022 report on the state of the medical Internet of Things also found that a majority of devices used in medicine (pharmacology, oncology, laboratory) are running versions of Windows older than Windows 10. This includes medical devices running versions of Windows as old as Windows XP, released in 2001. Microsoft ended all support for Windows XP in April 2014, but a significant number of expensive medical devices, such as X-ray, MRI, and CAT scan machines, still rely on computers running the now 22-year-old operating system. Research by Palo Alto Networks in March 2020 found that 83% of these devices rely on unsupported operating systems, such as Windows XP and Windows 7. Hospitals are usually reluctant to upgrade, even if unsupported software puts their patients at risk or jeopardizes their HIPAA compliance, because upgrading operating systems may mean upgrading expensive hardware. Given skyrocketing costs for the patient and the provider, hospitals find themselves in the unenviable position of having to choose between painting an ever-larger security target on their backs or spending millions of dollars on hardware and software upgrades. Unfortunately for hospitals and other medical facilities, refusing to upgrade can mean serious fines imposed by the federal government. Just last month, the Department of Health and Human Services reached a $75,000 settlement with Kentucky-based iHealth Solutions, a provider of software and services for the medical sector, for violating HIPAA security and privacy laws. HHS determined that iHealth Solutions did not disclose or remediate weaknesses within its own network, leading to a data breach and release of patient records in 2017. For the healthcare industry, in particular, network security and compliance have become particularly thorny issues, given the requirements that federal and state laws set forth for the transmission and storage of patient data.
The specter of compromised medical devices only adds to the pressure on hospitals to employ best security and networking practices, lock down devices, and deploy software and hardware from known vendors with a track record of supporting their products. For non-medical industries, the stakes may be lower but the importance of data security should still be top-of-mind for business owners. The Internet of Things is rapidly evolving and represents substantial added value to businesses who want to harness data to make better decisions; the devices we allow on our networks, however, must be managed and monitored to ensure they don’t become a liability. IoT security isn’t going to become easier as the segment grows, in our opinion. More devices on networks means more potential for exploits and theft of sensitive information. When we talk about data breaches in 2023, we don’t ask “if”, but “when”. This probably isn’t the optimistic take that business owners and IT managers want to hear, but threats are only becoming more complex. Data security is hard and requires an ongoing effort from your IT provider, as well as an ongoing commitment from you, the business owner–anything less can expose you to identity theft, fraud, regulatory fines, or even the loss of your business. Geek Housecalls and Geeks for Business offer free security consultations for home and business. Get in touch today for your free consultation and a detailed IT plan, tailored to your unique needs.

  • Managed storage as a defense against ransomware

    It’s getting harder to trust cloud storage providers — In the wake of revelations that Western Digital’s internal IT resources were compromised by a malicious third-party hacker group, serious questions about the security and reliability of cloud storage services have again been raised. This isn’t the first time a cloud storage provider has been compromised, either; in 2015, health insurance provider Anthem experienced a massive data breach which exposed the personally identifiable information (PII) of around 80 million of its customers. In 2020, Anthem struck a deal with a group of State Attorneys General to pay a $39.5 million settlement relating to the hack. Anthem denied any wrongdoing. We’ve discussed data breaches in this blog before, but the frequency and severity with which they now occur has encouraged us to impress upon our clients the importance of managed storage services. Whether you pay for end-user cloud storage services like iCloud or Google Drive, your sensitive information is still in the cloud. Healthcare information, browser search histories, call logs, telemetry from smart devices and other medical devices, financial information, and more are all stored somewhere. Is that data encrypted? Are the companies who initially gathered your data even still in business? If they were acquired by another company, did the parent company inform you what they planned to do with your data? Has your data been sold to third parties without your knowledge or consent? These massive data troves, often poorly secured and subject to nebulous or nonexistent regulation, are valuable to both “legitimate” outfits like advertisers as well as nefarious entities, like black-hat hacking groups. Some of this data can’t easily be managed; when you consent to use a product or service, you’re entrusting your information to a corporate entity, or entities, who probably don’t have your best interests in mind. Even Microsoft sells your data.
To a degree, you’re at the mercy of these tech giants’ End User License Agreements; these days, it’s all but impossible to go “off the grid” as far as data gathering is concerned. However, while the wheels of legislation may turn slowly, you can limit the amount of data you release to third parties by controlling much of the data that is personally significant to you (like family photos, videos, music, documents, and so on). This is where managed storage comes in. In spite of enticing promotional offers and rock-bottom pricing, it bears repeating that the cloud is just someone else’s computer. By using any cloud storage service, no matter the money and corporate pedigree behind it, you’re assuming that the provider has done their due diligence in securing your data. Surely, we can trust the likes of Google, Apple, and Microsoft to understand fundamental concepts of data security, right? Well, can we? It’s important to understand that services like iCloud and Google Drive are inexpensive for a reason. This isn’t to say that you’re getting a less secure product if you pay less money, rather that the services themselves are constantly in flux; Google might offer you 50GB of cloud storage for $1 per month, with a feature you like or use frequently (like integration with your Windows file system). From one month to the next, Google might decide that this feature you like is too costly or complex for them to continue supporting. A while later, that feature is gone. You’re still paying the same (or more) for the service, but the product has changed. Are you still getting what you paid for? To reiterate: you are at the mercy of the companies who manage these cloud storage platforms. This would be less of a concern if no one ever stored any sensitive data on these services, but people do just that–constantly.
People also back up this sensitive data to one service and keep it nowhere else, placing total faith in the reliability and privacy of the cloud storage service they’ve chosen. This is a dangerous gamble, to say the least. No matter the scale or the cost, all cloud providers eventually suffer a failure. The failure may be catastrophic. You might lose everything. You probably won’t–historically, the largest cloud storage providers have been pretty good about keeping user data intact. But is this a bet you want to make, day in and day out, with irreplaceable data? Beyond just availability, do you trust your cloud provider to secure that data? Lost data is usually less devastating than stolen data, depending on what kind of data is being stored on a cloud storage service. To that end, Geek Housecalls/Geeks for Business is introducing our managed data service. While we have worked closely with the preeminent cloud storage provider Backblaze to help clients streamline their data backup solutions, we realize they are, for better or worse, another cloud storage service. We mean no disparagement toward Backblaze (because they’re great at what they do), but cloud storage just isn’t a bulletproof solution to store and secure the rapidly growing amounts of data that the average person generates. As such, we’d like to shine a light on the importance of onsite data storage: our managed data product involves an onsite hardware appliance (known as a Network Attached Storage server) and automation routines written by us, to ensure that your data is backed up on your schedule, securely. Envision a central storage device that pulls in data from each client device on your network (laptops, desktops, phones, smart devices such as security cameras). Now, envision a device that does this without user intervention or configuration. This is our “value-add”; we design and implement both the hardware and software, utilizing industry best practices and hardware from Synology.
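To make the idea of a scheduled, versioned backup routine concrete, here is a minimal Python sketch. The paths, retention count, and folder-naming scheme are all hypothetical illustrations; real NAS products (Synology Hyper Backup, ZFS snapshots on TrueNAS) do this far more efficiently with block-level deduplication and immutable snapshots.

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(source: str, dest_root: str, keep: int = 30) -> Path:
    """Copy `source` into a timestamped snapshot folder under `dest_root`,
    then prune the oldest snapshots beyond `keep` copies."""
    dest = Path(dest_root)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    target = dest / stamp
    shutil.copytree(source, target)  # naive full copy; real tools deduplicate
    # Retention pruning: keeping many dated copies means ransomware can't
    # silently overwrite every good backup at once.
    snapshots = sorted(p for p in dest.iterdir() if p.is_dir())
    for old in snapshots[:-keep]:
        shutil.rmtree(old)
    return target
```

Run on a schedule (cron, Task Scheduler, or the NAS's own scheduler), a routine like this gives you a series of point-in-time copies to roll back to, which is the core idea behind our automation.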
For business clients with more involved data storage requirements, we also work with TrueNAS Enterprise for custom storage server solutions. Network Attached Storage (NAS) has been around for decades, and serves an important purpose when we approach the thorny subjects of data security and data availability. This has held especially true for business use cases, but a logical data storage solution is increasingly critical in home environments, as well. NAS devices, such as those from Synology, are also highly extensible; your storage server can be extended with plugins for popular services like Apple AirPlay and Plex Media, in addition to allowing you to host local instances of password management software like Bitwarden. Modern hardware has become fast and efficient enough to unlock a lot of possibilities here, all without needing to pay a cloud storage provider a monthly subscription fee. Most importantly, you retain control of your data, which is something no cloud provider can honestly claim. And finally, we don’t want to make it seem like the big cloud storage providers are bad or can’t be trusted–just that they shouldn’t be your first or only choice for data management. If your storage needs are greater than 50 or 100GB, costs can add up quickly. And it doesn’t really ever make sense to pay a princely sum to host “bulk data” (movies, TV shows, games, music) in the cloud, to begin with. Bulk data storage is much more economical when done on premises (your home or business) and gives you faster and more secure access to that data, as well. We regard solutions like Microsoft OneDrive, Google Drive, and Apple iCloud as good backups for your backups. That is, you should have multiple copies of important files, and those files can be stored in the cloud. We do strongly recommend encrypting important files, just in case your cloud provider is compromised, however.
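To see how quickly those subscription costs compare against owning the hardware, here is a back-of-the-envelope calculation. Both figures are hypothetical illustrations, not quotes: a $550 two-bay NAS with drives versus a $9.99/month 2 TB cloud tier.

```python
def breakeven_months(nas_cost: float, cloud_monthly: float) -> float:
    """Months of cloud subscription fees it takes to equal
    a one-time NAS hardware purchase."""
    return nas_cost / cloud_monthly

# Hypothetical: $550 NAS (with drives) vs. $9.99/month cloud storage tier
print(round(breakeven_months(550, 9.99), 1))   # ~55.1 months
```

Under these assumed prices, the NAS pays for itself in under five years, and the comparison tilts further toward on-premises storage as your capacity needs grow past what a single cloud tier covers.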
In the realm of data storage, the adage is “two is one and one is none” with respect to how many copies of your data you really need. Our recommendation is that our customers should have an onsite data backup, an offsite backup that only connects to the Internet to synchronize data with the onsite copy, as well as at least one cloud backup (iCloud, Google Drive, Backblaze, OneDrive, etc.). Additionally, cloud services like Google Drive can easily be integrated with onsite backup solutions. If you’d like to have all of your tax documents, for instance, be backed up to the cloud as soon as they’re backed up to your local NAS, that’s easily accomplished thanks to the extensibility of modern storage servers. One substantial benefit to having secure, managed backups is that, in the event of a ransomware or other malware attack, you have the ability to roll your systems back to known-good configurations, bypassing the need for expensive, complex malware remediation. Ransomware can’t thrive in an environment where secure data backups exist. This is something that Microsoft claims to offer with its OneDrive product, but such a claim is optimistic at best. Data snapshots, versioning, and offline backups go far beyond the scope of what OneDrive or any other consumer-grade cloud backup solution can hope to offer. Data management is only going to become more complex, and we think it’s time that everyone took control of their data destiny. Give Geek Housecalls and Geeks for Business a call (or email) today to set up a consultation for your storage and security needs.

  • Your laptop’s advertised battery life is a lie

    — Windows laptop manufacturers have dramatically overstated battery runtimes for years. With the sole exception of Apple, every single laptop manufacturer has lied, consistently, about their laptops’ battery life. As a study from Digital Trends found in 2017, Apple actually understated the battery runtime of their MacBook Air 13”, claiming a 10-hour battery life while Digital Trends’ testing found the MacBook ran for 12 hours. In this instance, Apple is the only winner. Every manufacturer of Windows laptops was found to have overstated (if we’re being generous) or lied about (if we’re being honest) battery life. Dell was found to have overstated their battery life figures to the tune of 4 hours, while HP overstated battery runtimes by almost 5 hours. When probed for comment on these vast discrepancies, Dell told Digital Trends: “It’s difficult to give a specific battery life expectation that will directly correlate to all customer usage behaviors because every individual uses their PC differently.” Dell’s statement is a cop-out, at best; every Windows laptop manufacturer will no doubt provide a variation of the very same statement Dell gave on the subject of battery life. We can say, authoritatively, that their response to Digital Trends is a complete misdirection. What Dell fails to acknowledge is that battery life tests are generally synthetic and not based on real-world workloads (such as web browsing, video editing, or software development). Let’s dig into what exactly these synthetic battery life tests are and why they are absolutely worthless for providing any useful insight into a device’s real battery life. UL Solutions (Underwriters Laboratories), publisher of various computer benchmarking applications, offers what they describe as a standardized battery life test within their PCMark 10 benchmark suite. The battery life testing portion of PCMark 10 tests laptop battery life under three synthetic workloads: video, modern office, and gaming.
Each workload test also generates a power usage report which indicates the wattage used by a given laptop under each synthetic workload. While this standard theoretically allows for independent reviewers and media outlets to test a laptop’s battery life and have directly comparable results to other laptops tested with PCMark 10, the reality is murkier. Owing to runtime losses from battery degradation, changes in runtimes due to firmware and operating system updates, as well as possible battery life fluctuations between operating system versions (e.g., Windows 10 versus Windows 11), maintaining a database of reliable battery life numbers becomes a daunting task. This is made worse by the sheer, mind-boggling number of Windows laptops available on the market at any given time. To further complicate matters, some manufacturers sell the same laptop with smaller and larger battery capacity options. Lenovo, for example, offers two variants of their ThinkPad X13 Gen 3 laptop: one with a 41Wh (watt-hour) battery and one with a 54.7Wh battery. Naturally, the variant with the 54.7Wh battery will produce longer battery runtimes thanks to its larger capacity. The problem is Lenovo’s own website makes it difficult to know which version you’re buying; it’s usually assumed that preconfigured versions of Lenovo laptops with higher end specifications will also include a larger battery, if one is offered. Retailers likewise often fail to publish battery capacity information if none is made available by Lenovo, in this example. Other online computer review publications, like the seminal Notebookcheck.net, have created their own battery life tests, which are not standardized and can’t be directly compared to battery runtime numbers produced by other reviewers. In Notebookcheck’s case, however, their test results are internally consistent, and their methodology remains applicable to older laptops reviewed years ago.
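The arithmetic behind those capacity differences is simple: estimated runtime is capacity in watt-hours divided by average power draw in watts. A quick sketch, assuming a hypothetical 10 W average draw for light office work (actual draw varies enormously with screen brightness, workload, and wireless use):

```python
def runtime_hours(capacity_wh: float, avg_draw_w: float) -> float:
    """Estimated battery runtime: watt-hours divided by average watts."""
    return capacity_wh / avg_draw_w

# Lenovo's two ThinkPad X13 Gen 3 battery options at an assumed 10 W draw:
print(round(runtime_hours(41.0, 10.0), 1))   # 4.1 hours
print(round(runtime_hours(54.7, 10.0), 1))   # 5.5 hours
```

The same formula explains why synthetic tests diverge so widely from real life: change the assumed draw from 10 W to 6 W and the same battery "gains" hours of runtime on paper.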
Elsewhere on the Internet, one benchmark may specify a 150-nit brightness setting with WiFi disabled for their battery test, while another may specify a 200-nit brightness setting with WiFi enabled. The manner in which battery runtimes are tested is not at all consistent across manufacturers and review outlets. This maelstrom of non-standardized data allows the laptop makers themselves to claim that there is no truth and that everyone’s experience with battery life will be different. We don’t encounter laptop makers understating the battery life of their laptops–why is that? Better to under-promise and over-deliver, isn’t it? Of course, if Dell, HP, Asus, or Lenovo admitted the average person might only see 4 hours of battery life, they’d sell fewer laptops. Apple, especially now that it produces its own silicon, is in the enviable position of offering the best battery life in the industry. Windows laptop manufacturers have taken note of this and are willing to beg, borrow, and steal their way to comparable energy efficiency in their own machines. This, realistically, is impossible: Intel and AMD cannot compete with the efficiency of Apple’s in-house silicon (though AMD is currently still ahead of Intel in this regard). Much of the battery life delta between Windows (x86) and Apple (ARM) is down to architectural differences between x86 and ARM. x86 has existed since 1978 and, while it has grown in complexity since, has not been fundamentally overhauled to address inherent inefficiencies. The x86 architecture contains legacy instruction sets that just aren’t relevant to modern computing workloads. However, this legacy cruft still takes up space on the processor’s die and requires power, whether it’s used or not. This agglomeration of inefficiencies within x86 has made it harder and harder for processor manufacturers (in this case, Intel and AMD) to produce the kinds of efficiency gains that Apple did by jettisoning x86 altogether. 
This is why Apple can understate its battery life and no one else can. As more people find that they can do most, if not all, of their work using web apps and cross-platform apps, Windows starts to become less of a necessity. As a consequence, Windows-based laptops with poor battery life become less of a necessity. Kids in schools across the country have found that they can do their assignments just as easily, if not more easily, on Chromebooks running Google’s ChromeOS, than they can on Windows laptops. Software developers and creative professionals in photography and graphic design have long chosen Apple laptops for their reliability and more focused software offerings. Maybe it’s time for users of other stripes to reevaluate whether Windows and x86 are all that compelling, anymore. MacBooks are expensive, though perhaps less so than you imagined: the price of entry hovers around $800 for an entry-level MacBook Air with 8GB of RAM and 256GB of storage. This represents a perfectly adequate specification for casual users and office workers. While $800 is appreciably more expensive than the low-end, $300 Windows machines you might find at Walmart, the differences in battery life, performance, build quality, and overall polish justify spending more. A MacBook will yield a much longer usable lifespan than a budget Windows laptop, as well. Unless a breakthrough happens in x86’s design efficiency, custom ARM silicon (like Apple’s M1 and M2 chips) seems to point the way forward for mobile computing. Regardless of your feelings about Apple’s closed ecosystem and its spendy hardware, Apple does deliver on its battery life promises. Let the PC makers squabble over 5- and 6-hour real-world battery life–you’ve got other options. If you’re in need of a new device or a new fleet of devices for your business, get in touch with Geek Housecalls today and we’ll recommend the best device for your use case, at competitive pricing.

  • Companies are turning their backs on the cloud

    Rising costs and complexity are fostering a cloud repatriation — As of early 2023, a new trend is emerging in corporate IT: companies are increasingly turning their backs on cloud-hosted infrastructure and returning, at least partially, to on-premises infrastructure. High cloud hosting bills and the complexity of managing cloud spending are placing an untenable load on smaller companies who don’t have the manpower or resources to babysit their cloud balance sheets. In a survey by FoundryCo, 40% of survey respondents cited the need to keep cloud spending in check as a roadblock to further use of cloud services. When a company’s IT department is juggling cloud services from multiple cloud providers, this task becomes a growing headache. FinOps systems are available to help companies manage their cloud spending, but buy-in for such a system necessitates more spending, perhaps more hiring, and additional complexity within the organization. For larger companies, the burden of integrating a new systems management paradigm is easier to defray; for smaller organizations, however, implementing a FinOps system for cloud spending management could overwhelm existing budgets. So what are SMBs (small and medium businesses) to do in this situation? Cloud repatriation. In the age of rising cloud computing prices, pulling back cloud resources to on-premises infrastructure can represent substantial cost savings. In a 2022 survey by Anodot, 50% of IT executives reported that “it’s difficult to get cloud costs under control”. 
Further, Anodot’s survey found that: “over a third of participants (37%) have been surprised by their cloud costs; more than half (53%) say the key challenge is gaining true visibility into cloud usage and costs, while 50% said complex cloud pricing and 49% said complex, multi-cloud environments; more than one quarter (28%) of respondents said it takes weeks or months to notice a spike in cloud costs, a figure that has not improved over 2021.” When we consider that the majority of the cloud compute and storage market is dominated by three major players, Amazon Web Services, Microsoft, and Google, it becomes less surprising that pricing is not as transparent as it could or should be. A 2022 study from HashiCorp found that 94% of companies are wasting money on cloud technologies. In its fourth-annual multicloud survey, Virtana, a multicloud management platform vendor, found that, among its 350 respondents, 69% said cloud storage accounted for more than one quarter of their total cloud costs, and 23% of respondents said cloud storage spending accounted for more than half of their total cloud costs. Overall, Virtana found that 94% of the IT leaders it surveyed reported rising cloud storage costs and 54% reported spending on cloud storage was growing faster than their overall cloud costs. In addition to concerns about spending and waste, additional concerns in the realms of information security and performance have also entered into the equation for many organizations. With the parade of high-profile data breaches, the largest of which have all occurred after 2010, CIOs and other IT executives are facing increasing operational headwinds when working to secure their companies’ resources. The burden on smaller organizations is disproportionately larger, owing to smaller budgets and slower reaction times to incidents like ransomware attacks and proprietary data leaks.
IBM reported in its 2022 Transformation Index that 54% of respondents felt that the public cloud is not secure enough. The challenges facing organizations that use the public cloud are only mounting, as datasets grow larger and the complexity of managing disparate data silos increases in kind. In 2020, data integration vendor Matillion surveyed 200 IT professionals on the issue of data integration and found that 90% of respondents cited lack of data insights as a barrier to growth. Rapid data growth and, consequently, the monumental tasks of management, archival, and security of that data have led to what can only be described as a data crisis at some organizations. Further complicating matters at the physical data storage level itself is the arms race between data storage demands and currently available storage technology. With advanced storage methodologies like HAMR (heat-assisted magnetic recording) soon to hit the market, it may seem like we’re managing to keep up with the demands of global data centers. The issue, however, is complexity. As hard drive densities increase, so does the complexity of the physical mechanisms needed to achieve them, and management becomes more difficult. Higher capacity hard drives require tighter tolerances, more advanced engineering techniques, more complex storage controllers, and are more prone to encountering unrecoverable errors during storage array rebuilds. Smaller organizations can glean useful information from these findings on the relative merits of “lifting-and-shifting” their IT infrastructure to the cloud, versus keeping their data and compute on premises. For smaller companies, the calculus for shifting IT infrastructure to the cloud often tilts in favor of utilizing the public cloud in low-demand scenarios, such as identity management, hosted office applications, and VoIP. However, even in lower demand scenarios, inflation in cloud pricing has led to 50% of organizations exceeding their budget for cloud storage spending.
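To put the rebuild-error point above in numbers: hard drives are commonly rated at one unrecoverable read error (URE) per 10^14 or 10^15 bits read. Treating errors as independent at the rated frequency (a simplifying assumption; real-world error behavior is lumpier), the odds of hitting at least one URE while reading an entire drive during an array rebuild can be sketched as:

```python
import math

def rebuild_ure_probability(drive_tb: float, ure_rate_bits: float = 1e14) -> float:
    """Chance of at least one unrecoverable read error (URE) while reading
    a whole drive of `drive_tb` terabytes, assuming independent errors at
    the vendor's rated one-per-`ure_rate_bits` frequency."""
    bits_read = drive_tb * 1e12 * 8  # TB -> bytes -> bits
    # P(no error) = (1 - 1/rate)^bits; log1p keeps this numerically stable
    return 1.0 - math.exp(bits_read * math.log1p(-1.0 / ure_rate_bits))

# Reading one 12 TB drive end-to-end during a RAID rebuild:
print(round(rebuild_ure_probability(12, 1e14), 2))   # ~0.62 at 1-per-10^14
print(round(rebuild_ure_probability(12, 1e15), 2))   # ~0.09 at 1-per-10^15
```

This is why higher-capacity arrays lean on schemes with extra redundancy (dual parity, mirrored vdevs) rather than trusting a single surviving copy through a multi-day rebuild.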
In February 2023, Google announced price increases in its Google Workspace Flexible Plans and in Google Workspace Enterprise Standard, but also announced new flexible pricing options for organizations looking to migrate their data to Google’s cloud platform. While pricing trends in cloud computing seem to be downward, cloud storage pricing is moving in the opposite direction; demand for cloud data storage is skyrocketing, forecast to grow around 25% year-over-year through 2028. Higher costs are a function of both this explosive demand in the storage sector and in ongoing supply chain snarls that have driven up semiconductor prices worldwide. When this trend will reverse is not clear–high year-over-year price increases in cloud storage are likely here to stay for the foreseeable future. If your business is struggling to get cloud compute and storage costs under control, Geeks for Business can help. Let us analyze your current cloud spending, your business’s compute and storage needs, as well as your forecasted growth, and we can help streamline complex multicloud environments while reducing ongoing costs. Every organization’s needs are unique: on-premises infrastructure isn’t (yet) a cure-all for high operating costs. Get in touch with Geeks for Business today and let’s get to work on getting your IT costs under control.

  • Telcos, with the help of the FCC, are dropping copper and replacing it with nothing

    — In August 2019, the FCC handed down Order 19-72, titled “FCC Grants Relief From Outdated, Burdensome Phone Industry Regulations”. Order 19-72 empowers telecommunications companies to discontinue support for and maintenance of their existing copper-based communications networks in favor of broad fiber-based network upgrades. That legacy copper networks would be upgraded to modern fiber networks is a drum telecom giants like AT&T have been beating since the 1990s. In practice, however, these upgrades still haven’t materialized for millions of households–particularly those in rural areas–because of a deeply ingrained anti-competitive spirit within the telecom industry. The Telecommunications Act of 1996, which is in fact an amendment of the Communications Act of 1934, sought a path to force incumbent telecommunications companies to upgrade their networks. However, as is the case with most modern legislation passed in the United States, this particular bill was largely toothless. Lacking the political power or regulatory authority to make incumbent telcos like AT&T do much of anything, the Telecommunications Act went largely ignored. Over the ensuing decades, incumbent telecom companies like AT&T, GTE, Sprint, Verizon, and BellSouth have wrung their hands, dragged their feet, litigated, lobbied, and done everything in their power to delay performing this copper-to-fiber network transition. Worse, the same telcos have for years let their existing copper infrastructure languish and decay, citing “prohibitive” maintenance and upkeep costs. In 2015, AT&T reluctantly struck a deal with the federal government to receive $428 million in subsidies to provide a minimum of 10Mbps downstream Internet service to rural areas of the country. AT&T claimed that it shouldn’t have to provide anything better than the FCC’s existing broadband standard of 4Mbps downstream, 1Mbps upstream.
Considering AT&T has long enjoyed a virtual protected monopoly thanks to ongoing subsidies from the federal government and special regulatory protections from state governments, their intransigence on performing basic network maintenance and upgrades is especially vexing. After the 1982 dissolution of AT&T’s historic “Bell System” and the partitioning of the Bell system into RBOCs (Regional Bell Operating Companies), the so-called “Baby Bell” companies were slowly reabsorbed into large, national telecommunications companies, thereby establishing new monopolies and effectively rendering the original intent of the 1982 Bell breakup completely moot. In a truly absurd show of the U.S. government’s apparent lack of regulatory authority, AT&T itself purchased BellSouth, which it had divested as an RBOC only 24 years earlier, for the sum of $86 billion in 2006. Now, these same legacy telecommunications companies, like AT&T, enjoy the same monopoly privilege that they were forced to give up only about 40 years ago. AT&T abuses its monopoly position in broad daylight, as was evidenced by their paying a $23 million settlement to resolve a federal criminal investigation brought by the Department of Justice. In 2017, AT&T Illinois President Paul La Schiazza conspired with “other parties” to arrange a payment of $22,500 to an ally of Illinois House Speaker Michael J. Madigan, in order to essentially buy votes on legislation AT&T found agreeable to its agenda (in this case, allowing it to step away from its copper landline obligations in Illinois). Returning to FCC Order 19-72, we see that the federal government continues to not only fail to pass or enforce effective regulatory measures on historically highly-regulated telecom companies, but that the FCC, under former Chairman Ajit Pai’s stewardship, actually went out of its way to facilitate these telecom companies in achieving their own goals.
Citing “red tape”, Chairman Ajit Pai trotted out the oft-deployed Republican rhetorical device that regulation is what’s holding companies back from necessary innovation and investment, and that deregulation would solve such broad infrastructure problems as lack of telecom service in rural communities. In spite of ongoing subsidies from the federal government to telecom companies, these upgrades have a mysterious way of never, or rarely, materializing. We need look no further than the failed effort to break up AT&T’s monopoly in 1982 if we want a case study in the almost comical failure of deregulation to achieve the ends its proponents promise. As it stands now, in 2023, the FCC order to allow telcos to walk away from copper is in force, with no real roadmap for replacing copper in low income communities and rural areas, in particular. Of equal importance, there seems to have been little consideration by the FCC of the technical limitations of fiber-based networks as far as landline phone functionality is concerned. Copper-based phone systems carry passive voltage throughout the system, around 48 volts DC when the phone handset is on the hook. This passive voltage allows standard phone handsets, which are not externally powered, to remain online during a power outage. While this may not seem like an issue of major importance in the era of nearly-ubiquitous cell phone ownership, it is an issue which has not been properly addressed by fiber-based Voice-over-IP (VoIP) phone systems. Smartphones rely on batteries for power and on cell tower backup generators for network connectivity, and can’t guarantee reliable 911 or emergency service connectivity during an extended power outage, so placing our faith in smartphones to bridge the gap that telco companies’ neglect has caused is not a very effective solution. 
During a power outage, VoIP systems go offline unless an appropriate battery backup has been installed; such a backup must have sufficient capacity to keep the Optical Network Terminal (ONT, the fiber modem), the router or gateway, and any powered VoIP handsets online during an extended outage. Power outages in the United States, as it happens, are becoming more frequent thanks to an increase in ‘extreme’ weather events and neglected infrastructure; as a consequence, backup battery systems that promise 8 hours of runtime should not be considered sufficient. In an extended outage, the loss of a copper phone line can be a matter of life and death; alarm systems and other security devices rely on copper phone line connectivity, in addition to traditional landline phones. How many excess deaths can we expect because monopolistic telecom companies, with the blessing of their allies in state and federal government, simply walked away from their legal obligation to provide service? The current patchwork of state-level regulation on copper line access only complicates the matter. VoIP service of any description, whether purchased from your Internet service provider or not, relies on effective backup power at every point in the network–your home, the fiber cabinet, and the local fiber hub–depending on how widespread a given outage is. Telcos, of course, will fight tooth and nail to spend as little as possible on ensuring continuity of service, as they have demonstrated over decades of grift, lobbying, and fighting basic, common-sense legislation–legislation that might just keep people online and alive during an emergency. While we don’t agree with shifting the monetary burden of telco failures onto customers, Geek Housecalls can help you implement a phone system that will keep you connected in a power outage or other emergency. 
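To make the sizing question concrete, here is a minimal sketch of the arithmetic involved in specifying a battery backup for fiber voice service. The wattage figures below are illustrative assumptions, not measurements of any particular ONT, router, or handset; measure your own equipment before buying a backup unit.

```python
# Rough battery sizing for keeping fiber voice service up during an outage.
# All device wattages are illustrative assumptions -- measure your own gear.

def required_battery_wh(loads_watts, hours, inverter_efficiency=0.85, usable_fraction=0.8):
    """Watt-hours of nameplate battery capacity needed to run the given loads
    for `hours`, allowing for inverter losses and a safe depth of discharge."""
    total_load = sum(loads_watts.values())
    return total_load * hours / (inverter_efficiency * usable_fraction)

loads = {
    "ONT (fiber modem)": 10,    # assumed draw, in watts
    "router/gateway": 12,
    "VoIP base + handset": 5,
}

# Compare an 8-hour target against a full-day outage with the same loads:
for hours in (8, 24):
    wh = required_battery_wh(loads, hours)
    print(f"{hours} h runtime -> ~{wh:.0f} Wh of battery capacity")
```

The point of the exercise: a modest 27-watt load already demands a battery several times larger than typical telco-supplied 8-hour backup units once you plan for a multi-day outage, which is exactly the scenario extreme weather events create.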
We have extensive experience in VoIP phone systems and in specifying battery backup systems for installations of any size. Get in touch with us today to discuss your phone system needs.

  • Microsoft mistreats its captive audience

    In the Before Times, the Windows operating system was sold on physical media: first floppy diskettes, then CDs, then DVDs. When you purchased a physical copy of Windows (which, until the mid-2000s, was the only way to purchase it), you also purchased a license to use that copy of Windows. This license, while subject to Microsoft’s End User License Agreement, was basically immutable: you bought a physical item and you owned a tangible good. In the years since Windows 7, Microsoft has labored tirelessly to turn Windows from a piece of software you bought every few years into a service that you rent. While Microsoft won’t say as much, the intention is to turn Windows into a subscription, like so many other products in the modern tech sphere. Starting with Windows 11 in 2021, an active Internet connection and a Microsoft Account are mandatory in order to complete Windows setup on a new PC (or on an existing PC being upgraded from Windows 10). While there were methods to bypass this requirement in earlier builds of Windows 11, Microsoft has since focused on eliminating such workarounds. The long and short of it is: Microsoft is dedicated to killing the Local Account, whether you like it or not. Windows 11 Pro, it must be noted, is exempt from this mandate (for now) and offers users an “I don’t have Internet” option during setup. Windows 11 Home users are not so favorably considered. While there remain a couple of ways to fool Windows 11 Home into letting you create a Local Account, it’s impossible to know how much longer those tricks will work. The first is to disable your Internet connection during setup, which is easy if your PC is plugged into an Ethernet cable (just pull the cable!) but becomes more of a chore if your PC connects strictly over Wi-Fi; some users report that pressing Alt+F4 at the “Let’s get you connected” screen bypasses the requirement, while others report it doesn’t. 
At this point, use of the Command Prompt usually becomes necessary, requiring users to test various unsavory methods found on Google just to create a Local Account. None of this is to say that Windows users shouldn’t be able to use a Microsoft Account if they want one; the point is that a choice should be offered at all. The current Big Tech paradigm of automatically opting unwitting users into things they don’t want or understand is reflected in Microsoft’s efforts to pretend Local Accounts never existed. And the strangest part of it all is that this behavior from Microsoft is so expected and mundane that the tech media has largely stopped reporting on it. Rather than collectively hold Microsoft’s feet to the fire for its abuse of a largely captive audience, we’ve instead agreed to just suffer through it. Microsoft, no stranger to antitrust lawsuits, must imagine it is walking on thin ice as it implements ever more anti-user and anti-competitive features that exploit its dominant position in desktop operating systems. While Microsoft may hope to avoid further antitrust litigation, it has recently made itself a target by bidding to acquire video game giant Activision Blizzard for an astounding $69 billion. Now, Microsoft’s legal team might argue, with some success, that the company no longer holds a monopoly in the operating system space: rivals do exist, in Apple’s macOS and in Linux-based operating systems, and they are suitable replacements for Windows for a minority of current Windows users. But whether Microsoft acknowledges it or not, it has a largely captive audience in its Windows user base. When one considers that Microsoft Windows and Windows Server are staples of the vast majority of corporate and government institutions, it becomes clear that simply dumping Windows for macOS or a variant of Linux is not merely unlikely, it’s absurd. 
While Microsoft’s Windows hegemony has diminished from a lofty 92.02% of the desktop market in January 2011 to 74.14% in January 2023, Windows is obviously still king. Decades of backroom deals with other tech giants like Intel, plus vendor lock-in, set Microsoft up for perpetual success with little incentive to actually improve its products. Linux might run the world (the open source kernel powers operating systems running on 96% of the world’s top one million web servers), but Microsoft Windows remains an 800-pound gorilla in both end-user operating systems and corporate IT. Where does this leave users who need Windows because their software doesn’t run on anything else? Stuck. It doesn’t matter whether Windows is the best solution, or the most affordable, or the easiest to integrate with an existing tech stack; if your program has no Linux or macOS equivalent, you have no recourse. Happily, this forced marriage to Windows has begun to wane with the rise of cloud applications, which run in a web browser and are operating system agnostic. Adobe, long wedded to the traditional desktop, now offers its Creative Cloud suite of applications for both Windows and macOS (though, sadly, not for Linux-based operating systems like Ubuntu). Microsoft’s own cash-cow productivity suite, Microsoft 365, is also available in web app form, extending its reach to macOS and Linux users. Microsoft even claims to embrace Linux, having introduced the Windows Subsystem for Linux in Windows 10 back in 2016 (and, later, shipping a genuine Linux kernel with WSL 2). Such trends would indicate that Microsoft wants to distance itself from its checkered past as a monopolist and industry bully. Why, then, does Microsoft continue to do its determined best to make Windows more stifling, more restrictive, and less usable with every subsequent release? The answer is perhaps not a surprising one: Microsoft wants an ecosystem. 
Apple, its largest rival in the desktop operating system space, has an ecosystem, and profits handsomely from tightly controlling the hardware, software, and services its users rely on. Having failed miserably in its foray into smartphones and been battered in the developer and server space by open source, Microsoft appears to be relenting and grudgingly embracing open source. Forgive my cynicism in saying “grudgingly”; it stems from Microsoft’s public and private dismissal of open source software over the last three-and-a-half decades. Under current CEO Satya Nadella’s leadership, the culture at Microsoft does seem to have changed for the better, relative to the ‘bad old days’ under Steve Ballmer and Bill Gates. Attitudes toward open source in Microsoft’s Azure, IoT, and services divisions may well have shifted positively, but Windows remains a puzzling contradiction to those shifts. We must concede that the Windows NT kernel is a sprawling, complex codebase that few Microsoft engineers fully understand, and that rewriting it would be a truly Herculean undertaking. Having said that, nothing changes if nothing changes. The momentum Microsoft has built in its other divisions must be applied to Windows, or the operating system is destined to wither, adding to its market share losses since the turn of the century. Ultimately, we must consider Microsoft’s newest idea for its operating system: selling ad space. Microsoft’s enduring obsession with injecting ads directly into Windows has miffed more than a few of its users, but the company continues to toy with the idea, sometimes boldly, sometimes more timidly. Application “stubs” for the likes of Instagram, Hulu, ClipChamp, and Disney+ now litter the Start menu of a freshly installed copy of Windows 10 or Windows 11, thanks to undoubtedly lucrative partnerships Microsoft has forged with these companies. 
These stubs are easily uninstalled and are more of an annoyance than a privacy or security issue, since they aren’t full applications, but they foretell a more ominous advertising future for Windows: Microsoft’s current push to bake non-removable OneDrive promotions and other first-party ads directly into the operating system itself. Sentiment toward these practices may be negative, but most people won’t throw out their Windows PCs over them. Perhaps the larger issue at play isn’t what Microsoft, Apple, or Google are doing with their products, but advertising itself. Technology and advertising have become so interwoven that the only things keeping ads from eating reality itself are effective legislation and regulation. We’ll discuss the state of digital advertising, its future, and its unintended consequences in a later post.

  • Why consumer security products are bad

    With LastPass’s implosion and private equity firms buying up the tech sector, what’s to be done? LastPass, McAfee, Twitter, Optus, WhatsApp, Uber, Shein–you may be familiar with some of these companies, many of which are now trying to shed the ignominy of largely self-inflicted poor reputations. Most recently, LastPass experienced a data breach that, its executive leadership has now admitted, led to the exfiltration of users’ encrypted password vaults. More egregious still, LastPass has offered affected users no real remedy or advice. How could a tech company of LastPass’s size get away with such obvious negligence? Private equity. In 2019, LogMeIn, parent company of LastPass, agreed to sell itself to two private equity firms to the tune of $4.3 billion. Understandably, there was concern among tech insiders and LastPass users that the sale would change LastPass’s business model; private equity firms generally exist to purchase assets and maximize their value for a later sale. In LogMeIn’s case, this was very much the trajectory. Francisco Partners and Elliott Management were the two firms that bought LogMeIn in 2019, and shortly thereafter, in 2020, LastPass announced it was raising prices. If users wanted access to their LastPass-managed passwords on more than one type of device, they’d have to pony up another $36 a year. This killed the free option previously offered by LastPass, which had represented a significant portion of the company’s user base. Of course, this is expected behavior in a private equity takeover: raise prices, increase shareholder value. The downside, predictably, is that the increased costs you pay are not reinvested in the company. In fact, one could imagine that LastPass pulled resources away from its security teams in the wake of the sale; that would neatly explain why the platform was hacked multiple times in 2022 alone. 
As we covered previously, the password vaults stolen in these breaches were, and hopefully remain, encrypted. But in LastPass’s case, that’s cold comfort. Would you trust a company that didn’t follow basic security protocols to have used the strongest available encryption for your data? Given the priorities of LogMeIn’s buyers, it seems far-fetched to believe those password vaults won’t eventually be decrypted by the very same enterprising hackers who stole them. Handing a safe and a set of lockpicking tools to a master lockpicker doesn’t mean your valuables are secure just because he lacks the combination. This trend is a worrying one: the companies that promise to secure your data, monitor your credit, protect your identity, and keep your devices safe are increasingly selling out to the highest bidder, throwing security out the window, and exposing you to unprecedented risk. We’ve entered an age of consumer security chaos. Companies like McAfee preload their obnoxious, resource-intensive, and often ineffectual software onto millions of computers each year. And for the record, McAfee, despite being an ostensibly security-focused company, has not been immune to breaches of its own: in 2017, McAfee’s own ClickProtect service was caught directing customers to download a malware-infested Word document. In 2019, popular VPN provider NordVPN revealed it had been hacked; while the scale and severity of that breach were much less significant than other high-profile incidents of recent years, it demonstrated a means by which a cloud service provider (CSP) could be compromised on a larger scale. As threats have evolved, consumer security products, such as McAfee’s antivirus platform and other consumer-oriented security services, have not kept up. 
Zero-day threats, or threats known to bad actors before they’re known to software and hardware vendors, are often blind spots for these consumer-focused security solutions. In Q4 2021, 66% of malware attacks exploited zero-day vulnerabilities. By definition, zero-day malware can’t be detected by signature-based antimalware engines, because signatures don’t exist until after a vulnerability has been exploited. Security products therefore use heuristics and machine learning models to detect such exploits as quickly as possible: while software teams are patching code and releasing signature updates, these heuristically driven engines can, in theory, protect critical systems from zero-day attacks. In practice, however, 80 zero-day vulnerabilities were exploited in 2021–the highest number since monitoring of zero-days began. It must be noted that the majority of zero-day attacks originate from state-sponsored hacking groups, with the remainder largely the work of financially motivated hackers; zero-days are not trivial and are rarely exploited by lone actors. Consequently, the targets of zero-day attacks are usually governments, large corporations, or other high-value assets rather than individual people. Individuals are still affected by zero-days, nonetheless: in 2020, Citrix saw remote access vulnerability attacks increase by an astonishing 2,066%. As threats grow ever more sophisticated, protecting yourself requires a more sophisticated defense. Installing antivirus software just isn’t enough anymore, in spite of what big antivirus vendors like McAfee, Norton, Kaspersky, and Avast would have you believe. Combined, these companies spend hundreds of millions of dollars per year to convince you that you need their ‘ultimate security’ package, which can cost hundreds of dollars per year, at dubious benefit to your actual security. 
Security researchers increasingly regard standalone antivirus software as redundant and unnecessary, given that modern threats have shifted away from classic software viruses toward ransomware, phishing, and complex social engineering attacks. Naturally, the companies that make antivirus software are incensed at the notion that you don’t need them anymore. To make up for this inevitable loss in revenue, security software vendors have taken to bundling mostly-worthless features like VPNs, identity theft monitoring, and other baubles as “value adds” in their security software. To be fair, the strategy has been effective: the average computer user doesn’t know why they don’t need these things, and fear-based marketing has always been a powerful tool in the tech space. The bottom line for customers, both individuals and businesses, is that a real security solution takes a multi-pronged approach: an antimalware service (e.g., Windows Defender), an effective ransomware protection and remediation strategy (e.g., offsite data backups), a password manager that has been audited and shown to be secure (e.g., Bitwarden), logical network segregation (e.g., isolating less secure devices on their own networks), and device patching (e.g., keeping all of your software and firmware up to date). Don’t go it alone–Geek Housecalls and Geeks for Business are here to help you navigate the ever-changing security landscape.

  • Smart-device company Eufy admits major camera security breach

    Another day, another breach; which companies can you trust? As 2022 came to a close, yet another security breach was disclosed (albeit only partially) by a large smart device manufacturer. On November 23, 2022, security researcher and YouTuber Paul Moore uploaded a video claiming that Eufy devices were sending photos and videos to the cloud, despite Eufy’s insistence that all photo and video analysis was performed on the devices themselves rather than in the cloud. Moore’s research alleged that user images and facial recognition data were (and still are) being uploaded to an AWS server maintained by either Eufy or its parent company, Anker. Since late November, Moore has updated his initial findings, reporting that some of these security issues have been patched, though there is no way to verify Anker’s and Eufy’s claims that previously stored cloud data is actually being deleted. As the saga unfolded, The Verge discovered that unencrypted streams from Eufy cameras could be accessed with ordinary video playback software such as VLC. When The Verge questioned an Anker representative about this vulnerability, Brett White, a senior PR manager at Anker, responded that it was not possible to initiate an unencrypted stream to a Eufy camera from a third-party video client like VLC. The Verge tested this claim and found that Anker lied: “But The Verge can now confirm that’s not true. This week, we repeatedly watched live footage from two of our own Eufy cameras using that very same VLC media player, from across the United States — proving that Anker has a way to bypass encryption and access these supposedly secure cameras through the cloud.” The Verge notes that, in spite of this disturbing revelation, there is no evidence the vulnerability has been exploited in the wild. 
However, the method for connecting to cameras over the Internet involves simply knowing a camera’s serial number, a 16-digit code encoded in Base64; decoding these addresses with freely available online tools is trivial. The “unique” address also incorporates a Unix timestamp, which is easily forged by an attacker, as well as an identity token that Eufy’s servers don’t appear to actually validate. Finally, a four-digit random hex code is needed to complete the process; such codes are easily brute-forced, as only 65,536 four-digit hex combinations exist. Anker’s responses to these findings and allegations have ranged from half-hearted admissions, to flat denials (even when faced with very damning evidence), to outright radio silence. Paul Moore, who initially broke the story, has initiated legal action against Eufy and parent company Anker, alleging that their security practices violate the GDPR (Moore lives in the United Kingdom). But it gets worse. A few weeks after the initial story broke, The Verge published a follow-up to its original investigative piece, reporting that Eufy had not answered any of its questions about these vulnerabilities, instead opting to quietly remove the ten so-called “privacy promises” that had been publicly visible on Eufy’s website until December 8th, 2022. Those ten now-defunct promises follow:

  • “To start, we’re taking every step imaginable to ensure your data remains private, with you.”
  • “[Y]our recorded footage will be kept private. Stored locally. With military-grade encryption. And transmitted to you, and only you.”
  • “Here at eufy, we’re not just all talk and no action.”
  • “With secure local storage, your private data never leaves the safety of your home, and is accessible by you alone.”
  • “All recorded footage is encrypted on-device and sent straight to your phone—and only you have the key to decrypt and watch the footage. Data during transmission is encrypted.”
  • “There is no online link available to any video.”
  • “You need to use Eufy software and your account to decrypt the clips for viewing. No one else can access or read this data.”
  • “For Your Eyes Only”
  • “Peeking Prohibited”
  • “Everything In-House”

It doesn’t seem surprising, then, that Eufy memory-holed these items, given the potential legal ramifications of leaving them up, especially if Eufy is already under investigation by regulators in other countries. But it does speak to the casual attitude that so many companies take toward data security and user privacy. Many of Eufy’s sins in this case come down to a failure to implement basic information security best practices. These practices are well understood, well documented, and increasingly subject to regulatory scrutiny when ignored. So what should you do? If you own any Eufy products, get rid of them. Anker and Eufy have amply demonstrated how they feel about their users’ security, given their responses to the very fair questions the media and security consultants have asked them. But this is not an isolated problem. Eufy is not the first smart device maker to have an unflattering spotlight shined on its information security practices. Wyze has been similarly implicated in failing to address security vulnerabilities in its security camera platform, and Wyze also suffered a large data breach in 2019, when it failed to secure customer databases stored in the cloud. This leaves prospective customers with something of a dilemma: do I buy nothing, or buy the least-bad option? When you use a device maker’s cloud services, you’re agreeing to accept a certain amount of risk. A cloud server is just someone else’s computer, after all. 
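Returning for a moment to the address scheme described above: a short illustrative sketch makes clear why Base64-encoded serial numbers and a four-digit hex code offer essentially no protection. The serial number below is invented, and the address construction is simplified rather than a reproduction of Eufy’s actual scheme.

```python
import base64
import itertools

# Hypothetical 16-character camera serial -- NOT a real Eufy serial number.
serial = "T8410P1234567890"

# Base64 is encoding, not encryption: anyone can reverse it instantly.
encoded = base64.b64encode(serial.encode()).decode()
decoded = base64.b64decode(encoded).decode()
assert decoded == serial

# The final piece of the address is a random 4-digit hex code.
# Enumerating every possible value is trivial:
keyspace = ["".join(c) for c in itertools.product("0123456789abcdef", repeat=4)]
print(len(keyspace))  # 65,536 candidates, a keyspace a laptop exhausts in milliseconds
```

If the only secrets in an address are values like these, the address is not a secret at all, which is why an unvalidated identity token on the server side is so damning.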
When you buy an Internet-connected camera and pay for cloud video storage, AI recognition, and streaming, you’re implicitly trusting that device manufacturer to do the right thing and to be rigorous in its security policies. The surest way to reduce this risk is to keep your smart devices off the Internet entirely, which rather reduces their usefulness, especially in the case of security cameras. Your best path forward is a multi-part strategy, which I encourage everyone who owns (or is interested in) smart devices to employ:

  • Vet your suppliers: research the companies you’re buying your products from. Have they had data breaches or other security incidents? How did they respond? Has there been more than one such incident?
  • Segregate your network traffic: don’t put smart devices on the same network your computers use. VLANs (virtual LANs) suit this purpose perfectly. By creating a dedicated network just for your smart devices, you ensure they cannot reach sensitive data on your main home network. A compromised IoT device can wreak havoc, but the damage is minimized if that device can’t talk to the rest of your network.
  • Don’t rely solely on cloud solutions: companies like Eufy and Wyze want you to buy their cloud subscription services for things like video storage; these subscriptions are their cash cows, offsetting the often low price you pay for their hardware. As we’ve seen so many times before, these cloud storage and authentication services are breached, leak your data, or are otherwise compromised through poor security practices and unpatched software and firmware vulnerabilities. Look for cameras that support local video recording to a microSD card or other local storage device.
  • Use your firewall: create rules to prevent your cameras and other IoT devices from “phoning home” to remote servers that you don’t recognize, or that these devices needn’t otherwise communicate with. 
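As a concrete, if simplified, sketch of that last point: before writing firewall block rules, compare the destinations your IoT devices actually contact against a short allowlist of hosts they legitimately need. Every hostname below is a made-up example, not a real device endpoint.

```python
# Flag IoT traffic destinations that aren't on your allowlist.
# All hostnames here are hypothetical examples for illustration only.

ALLOWED_DESTINATIONS = {
    "firmware.example-vendor.com",   # expected: firmware update checks
    "time.nist.gov",                 # expected: NTP time synchronization
}

# Destinations observed in a capture of the IoT VLAN's outbound traffic:
observed = [
    "firmware.example-vendor.com",
    "time.nist.gov",
    "telemetry.unknown-cloud.example",  # not on the allowlist
    "telemetry.unknown-cloud.example",
]

# Anything observed but not allowed is a candidate for a firewall block rule.
unexpected = sorted(set(observed) - ALLOWED_DESTINATIONS)
for host in unexpected:
    print(f"block candidate: {host}")
```

The real work is in capturing the traffic (with your router’s logging or a packet capture tool) and deciding what belongs on the allowlist; the comparison itself, as shown, is trivial.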
When performing a traffic analysis of these devices, you’ll often find they routinely ping a number of cloud servers (often Amazon Web Services instances). For cloud-connected devices that need to upload photos and videos to remote servers, this is expected and necessary behavior, but what if you have no cloud subscriptions? What if the server a device is trying to reach isn’t a legitimate one? When you consider the Chinese government’s hostility toward Western countries, and that most of these devices are made in China and often communicate with servers based there, these are important questions to ask. Retain a network engineer, if necessary, to perform a proper traffic analysis for you and ensure your devices aren’t quietly making your home less secure. Finally, there is always the option of building a local-only security solution that doesn’t connect to the Internet at all; go this route and you’ll find your hardware and software options are numerous compared to the closed ecosystems most IoT and smart device companies are selling. Again, though, an offline security solution isn’t much use to a customer who wants to check in on their home, pets, or loved ones while out of town, at work, or running an errand. You can, however, convert an offline system into a more secure online system using our favorite smart device aggregation platform, HomeAssistant. HomeAssistant requires considerably more manual configuration and setup than any plug-and-play smart device, but it rewards users with greater security and emphasizes local control rather than handing your information to unknown, possibly malicious, third parties. Geek Housecalls has experience installing and configuring HomeAssistant for home users and businesses alike, with security foremost in mind. Contact us today for a free consultation!
