
> Search Results


Blog Posts (32)

  • Beware of these emerging cyberthreats in 2024

    The global cost of a data breach last year was $4.45 million, an increase of 15% over three years. As we step into 2024, it's critical to be aware of emerging technology threats that could disrupt and harm your business.

Technology is evolving at a rapid pace, bringing new opportunities and challenges for businesses and individuals alike. Rapid developments in artificial intelligence (AI), machine learning (ML), and quantum computing are leading companies across all industries to radically reconsider their approach to cybersecurity and systems management. While these technologies are poised to make our lives easier, they're also being used to launch sophisticated, large-scale attacks against the networks and devices we depend on. In this article, we'll highlight some emerging technology threats to be aware of in 2024 and beyond.

Data Poisoning Attacks

Data poisoning involves corrupting the datasets used to train AI models. By injecting malicious data, attackers can skew an algorithm's outcomes, which could lead to incorrect decisions in critical sectors like healthcare or finance. Countering this insidious threat requires protecting the integrity of training data and implementing robust validation mechanisms. Businesses should use AI-generated data cautiously, heavily augmented by human intelligence and data from other sources.

5G Network Vulnerabilities

The widespread adoption of 5G technology introduces new attack surfaces. With an increased number of connected devices, the attack vector broadens, and IoT devices reliant on 5G networks may become targets for cyberattacks. Securing these devices and implementing strong network protocols is imperative to preventing large-scale attacks. As mobile takes over more of the workload, ensure your business has a robust mobile device management strategy and properly tracks and manages how these devices access business data.
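To make the data-poisoning threat described above concrete, here is a minimal, hypothetical sketch in plain Python (not any particular ML framework; the values and labels are invented for illustration). It shows how a handful of attacker-injected, mislabeled training points can flip a simple nearest-centroid classifier's prediction:

```python
# Hypothetical illustration of data poisoning: the same input is classified
# correctly on clean training data and misclassified after label poisoning.

def centroid(values):
    """Mean of a list of 1-D feature values."""
    return sum(values) / len(values)

def classify(x, class_a, class_b):
    """Assign x to whichever class centroid is closer."""
    return "A" if abs(x - centroid(class_a)) <= abs(x - centroid(class_b)) else "B"

# Clean training data: class A clusters near 1, class B near 9.
clean_a = [0.0, 1.0, 2.0]
clean_b = [8.0, 9.0, 10.0]
print(classify(4.0, clean_a, clean_b))  # -> A (4 is closer to centroid 1.0)

# An attacker injects a few out-of-distribution points mislabeled as class A.
poisoned_a = clean_a + [20.0, 20.0, 20.0]  # class A centroid shifts from 1.0 to 10.5
print(classify(4.0, poisoned_a, clean_b))  # -> B: the same input is now misclassified
```

In practice, the validation mechanisms mentioned above amount to detecting and rejecting (or down-weighting) training points that fall far outside the expected distribution before they ever reach the model.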
Quantum Computing Vulnerabilities

Quantum computing, the herald of unprecedented computational power, also poses a threat. Its immense processing capabilities could crack currently secure encryption methods, and hackers might exploit this power to access sensitive data. This emphasizes the need for quantum-resistant encryption techniques to safeguard digital information.

Artificial Intelligence (AI) Manipulation

AI, while transformative, can be (and is being) used to facilitate the spread of misinformation. Cybercriminals are already creating convincing deepfakes with AI and automating phishing attacks. Vigilance is essential as AI-driven threats become more sophisticated, and robust detection mechanisms are needed to discern genuine from malicious AI-generated content. Regulatory bodies and watchdog groups have proposed mandatory watermarks for AI-generated content to make it easily discernible from human-generated (or human-reviewed) content.

Augmented Reality (AR) and Virtual Reality (VR) Exploits

AR and VR technologies offer immersive experiences, but they also present new vulnerabilities. Cybercriminals might exploit these platforms to deceive users, leading to real-world consequences. Ensuring the security of AR and VR applications is crucial to preventing user manipulation and privacy breaches, particularly in sectors like gaming, education, and healthcare.

Ransomware Evolves

Ransomware attacks have evolved beyond simple data encryption. Threat actors now use double-extortion tactics: they steal sensitive data before encrypting files, and if victims refuse to pay, hackers leak or sell this data, causing reputational damage. Some defenses against this evolved ransomware threat include:

    • Robust backup solutions
    • Regular cybersecurity training
    • Proactive threat hunting

Supply Chain Attacks Persist

Supply chain attacks remain a persistent threat. Cybercriminals infiltrate third-party vendors or software providers to compromise larger targets. Strengthening supply chain cybersecurity is critical to preventing cascading cyber incidents. Businesses can do this through rigorous vendor assessments, multi-factor authentication, and continuous monitoring.

Biometric Data Vulnerability

Biometric authentication methods, such as fingerprint or facial recognition, are becoming commonplace. But unlike passwords, biometric data can't be changed once compromised. Protect biometric data through secure encryption and ensure that service providers follow strict privacy regulations; these measures are paramount to preventing identity theft and fraud.

Advanced Phishing Attacks

Phishing attacks are among the oldest and most common forms of cyberattack, and they are becoming more sophisticated and targeted thanks to AI. For example, hackers customize spear-phishing attacks to a specific individual or organization based on online personal or professional information. Another example is vishing attacks, which use voice calls or voice assistants to impersonate legitimate entities and convincingly persuade victims to take certain actions. Ongoing employee phishing training is vital, as are automated solutions to detect and defend against phishing threats.

At Geeks for Business, we believe that a proactive approach to cybersecurity is critical. With our trusted cybersecurity partner, Huntress, we are able to hunt for threats within networks before they become breaches. With the complexity of cyberattacks rising, reacting to an attack just isn't enough; our 24/7 managed endpoint detection and response approach allows us to go on the offense against prospective cybercriminals.

Tips for Defending Against These Threats

As technology evolves, so do the threats we face, so it's important to be vigilant and proactive. Here are some tips that can help:

    • Educate yourself and others about the latest technology threats.
    • Use strong passwords and multi-factor authentication for all online accounts.
    • Update your software and devices regularly to fix security vulnerabilities.
    • Avoid clicking suspicious links or attachments in emails or messages.
    • Verify the identity and legitimacy of any callers or senders before providing information or taking action.
    • Back up your data regularly to prevent data loss in case of a cyberattack.
    • Invest in a reliable cyber insurance policy that covers your specific needs and risks.
    • Report any suspicious or malicious activity to the relevant authorities.

Need Help Ensuring Your Cybersecurity Is Ready for 2024?

Last year's solutions might not be enough to protect against this year's threats. Don't leave your security at risk. We help small and medium businesses throughout Central North Carolina manage their IT, reduce costs and complexity, expose vulnerabilities, and secure critical business assets. Reach out to Geeks for Business today to schedule a chat.

Article used with permission from The Technology Press.
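As a concrete illustration of the multi-factor authentication recommended in the tips above, authenticator apps generate time-based one-time passwords (TOTP, RFC 6238) from a shared secret using nothing more than an HMAC and the current time. Here is a minimal sketch using only Python's standard library; the secret shown is the RFC 6238 test key, not a real credential:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, digits=6, step=30):
    """Compute a TOTP code per RFC 6238 (HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = time.time()
    # Counter = number of 30-second intervals since the Unix epoch.
    counter = int(for_time) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): 4 bytes at an offset given by the low nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at time 59 -> "287082"
print(totp(b"12345678901234567890", for_time=59))
```

Both the server and the authenticator app derive the same code from the shared secret, so the code proves possession of a second factor without ever sending the secret over the wire.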

  • Seen and Unseen: The AI You Know, The AI You Don't

    While corporate America carries out its work of finding new ways to put artificial intelligence and large language models in places they shouldn't be, the utility of discriminative AI goes largely unheralded in the public consciousness. Companies like Microsoft and Google are spending billions marketing generative AI. Generative AI deals with the creation of new data that resembles existing data (for instance, creating a painting which is derivative of an existing work). Discriminative AI, meanwhile, deals with the classification and recognition of existing data. Discriminative models rely on supervised learning, where the datasets fed into the model are labeled and each data point corresponds to a label or category. Discriminative AI models are used in applications such as facial recognition, spam email filtering, image recognition, and sentiment analysis.

Generative AI, meanwhile, is employed in the creation of realistic images and videos, new musical compositions, and text generation (for example, writing an essay or email on behalf of a human user). Chatbots are based on a specific type of AI known as the large language model (LLM). Large language models are trained on massive text datasets and are particularly adept at translation, text generation, text summarization, and conversational question-answering. Google Gemini, for instance, is a collection of LLMs that functions as a conversational chatbot: when you give Gemini a prompt, it replies in a 'conversational' way to approximate an interaction with another person.

As a result of generative AI's novelty and its wide array of applications in the average user's life, Big Tech has seized on inserting it into everything from web search to online chatbots. While we struggle to fully appreciate the longer-term consequences of hastily deploying generative AI models in every corner of our lives, the momentum of AI only grows.
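The supervised-learning setup described above, labeled examples mapped to categories, can be illustrated with a toy discriminative spam filter. This is a hypothetical sketch of a naive Bayes classifier in plain Python; the four training messages are invented, and real spam filters are trained on far larger labeled corpora:

```python
import math
from collections import Counter

# Toy labeled dataset: each training message carries a spam/ham label,
# exactly the supervised-learning setup discriminative models rely on.
training = [
    ("win money now", "spam"),
    ("free money win", "spam"),
    ("meeting at noon", "ham"),
    ("lunch at noon", "ham"),
]

# Count word frequencies per label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in training:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Pick the label with the higher log-probability (Laplace-smoothed)."""
    scores = {}
    for label in word_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / len(training))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("free money"))    # -> spam
print(classify("noon meeting"))  # -> ham
```

The model never generates text; it only discriminates between categories of existing data, which is exactly the quieter, workhorse role the article contrasts with generative AI.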
There doesn't seem to be a consensus on whether AGI (artificial general intelligence) will ever materialize, or what risks it poses to humanity if it does; at present, we're still in generative AI's infancy and our regulatory approach is still very much in flux. AGI remains hypothetical; theoretically, an AGI model could reason, synthesize information, solve problems, and adapt to changing external conditions like a human can. The risks inherent in such a technically capable form of AI are hard to overstate. In short, when it comes to AGI: we don't know what we don't know. An AGI that outsmarts its creators could become impossible to control, leading to a potentially devastating sequence of unintended consequences for humanity. An AGI that decides its values don't align with human values, for example, could shut down power grids, launch massive cyberattacks against allied or enemy nations, or be used as a powerful tool in disinformation and social manipulation campaigns. Again, these concerns are all theoretical, but given the rate at which AI as a computer science discipline is evolving, we shouldn't discount the possibility of a future AGI.

The moral, ethical, and regulatory concerns surrounding AI mount by the day, and state governments are only now getting to grips with what regulating generative AI in particular will entail. The Connecticut State Senate recently introduced legislation to control bias in AI decision-making and to protect people from manufactured videos and deepfakes. The state is one of the first in the U.S. to introduce legislation targeting AI, and the usual cadre of mostly Republican opponents has seized the opportunity to claim the bill will "stifle innovation" and "harm small business". How, exactly, regulating generative AI will meaningfully harm the average small business remains to be seen.
But even so, opponents of the bill may have a point: complex legislation on a complex and rapidly evolving subject is going to bring unintended consequences with it. We find ourselves in an increasingly dire situation, then, where our legislators, largely geriatric and unplugged from the modern technological zeitgeist, are writing ineffective legislation on matters they don't even peripherally grasp. In fact, most computer science graduates working in their respective fields didn't specialize in artificial intelligence, so we're desperately relying on a vanishingly small percentage of the population to navigate these fiendishly complex issues. AI and ML are rapidly evolving fields and, owing to their popularity, more students majoring in computer science are now focusing on AI, but there is a significant lag between graduating with a specialization and becoming an expert in that specialty. Generative AI in the form of Alexa telling us a joke is the thing we see; the reality of trying to manage a world in which AI has embedded itself is the thing we don't.

Since the Internet's rise to ubiquity during the 1990s, legislation and regulation have lagged behind the explosive growth of technological advancement. We've been fighting an uphill battle to elect representatives who understand technology from a voter base that also largely doesn't understand technology very well. As the rate of change accelerates within the disciplines of AI and machine learning, we need experts in these fields who can respond effectively to those changes. In short, we are running up against the limits of effective governance when those doing the governing aren't digitally literate. Of equal concern is the idea that many of our aging representatives are surrounded by legions of aides and advisors who may well whisper in their ears that AI needs no regulation while those same advisors buy stakes in companies developing their own AI models.
The recalcitrance that members of the Republican party have displayed on the subject of effective AI regulation is par for the course, but in this particular case their oppositional defiance is uniquely dangerous to the public. AI is a Pandora's box: we don't know how an AI model will hallucinate, or how far disinformation generated by AI will spread before a human hits the kill switch. By integrating generative AI into the social fabric, we're essentially entrusting humanity's combined effort and treasure to an entity that has to be constantly managed, reviewed, and course-corrected to behave in a sane and predictable way. This is a more monumental task than most people with only a peripheral understanding of AI seem to realize.

The meteoric rise of machine learning within the field of AI also seems to be ushering in a new kind of societal disparity: technological. Those who control the algorithms that make up the body of AI will have a certain degree of power over virtually every aspect of human life; maybe this was the endgame that companies like Meta, Alphabet, and Microsoft had in mind from the outset. As we discussed in an earlier article about YouTube's methodology for promoting, recommending, and suppressing video content, how effectively can we regulate an industry when most of its doors are sealed to the public? It becomes increasingly clear that Big Tech is expediting its work of separating itself from society; they've spent the last 20 years digging a moat and creating a fiefdom that operates beyond the grip of the law. As companies like IBM, an AI forerunner in its own right, expand their influence by buying or killing the competition, power within Big Tech becomes more consolidated and key decisions in the realm of AI are made by fewer and fewer people.
Maybe all of this has less to do with AI and more to do with the notion that tech companies have developed a kind of power the world hasn't yet seen: the power to effectively manipulate reality. If we're all living in The Truman Show and we don't even know it, how would we know anything is wrong? Or maybe we're allowed to know there are problems with AI, but only in a superficial sense. When algorithms guide you along a set of tram rails, it must be asked: are these merely suggestions by the algorithm, or are they neatly packaged directives?

On the other hand, discriminative AI works comparatively quietly in the background and to much less public fanfare, processing the massive datasets that enable so many of the services we now take for granted. And there's good and valuable work to be done here: as the Internet grows in size, so, too, does the volume of data companies and individuals have to manage and contextualize. Without discriminative AI models, few of the digital experiences we enjoy would be possible. Even with AI, the amount of data generated by the rapidly growing number of devices on the Internet raises serious manageability questions for the future. There are nearly endless applications for discriminative AI in science, medicine, biotechnology, meteorology, climatology, and a number of other hard-science disciplines.

As with so many evolving technologies, there are important, practical uses for AI in science, research, and engineering, but the potential for abuse on the consumer-facing side is so staggering that effective legislation really can't come soon enough. AI is a tool like any other. What we have to contend with in Big Tech is not so much limited to AI; we have to contend with a group of self-appointed technocrats who have, time after time, shown total disdain for the public good.
The list of companies who openly sell your personal data to third parties (when they aren't losing that data to cyberattacks, that is) is long and ignominious. These are the companies who present users with 157-page Terms of Service agreements which, in any other context, would call for review by a lawyer. The same companies who can deplatform people or groups they find personally disagreeable. The very companies who can freeze your funds, revoke your domain, shut down your email, or delete your files, all usually with no real consequences from our intrepid regulatory authorities.

So, the question then becomes: do you trust that tech leaders can and will self-police with tools as powerful as these?

  • Microsoft's Enshittification of Everything

    "Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification." (Cory Doctorow, "Too Big to Care: Enshittification is a Choice")

Since the release of Windows 11 in late 2021, Microsoft has made a series of increasingly unpopular changes to the Windows ecosystem, calling into question the future of the operating system. In February 2023, Microsoft unveiled Copilot, a generative AI chatbot that served to replace Microsoft's Cortana. While Microsoft previously claimed more than 150 million Windows users used Cortana, that estimate was probably more than a little optimistic. Cortana's deep integration into Windows 10 allowed users to more effectively perform tasks with voice commands, but it lacked a clear path to monetization; that is, Microsoft couldn't figure out how to turn Cortana into a paid service. With Copilot's introduction, development of Cortana ended and Microsoft redirected its Cortana resources into Copilot. While Apple relies on OpenAI's ChatGPT for parts of its Apple Intelligence product, Microsoft's Copilot products are likewise powered by OpenAI models: GitHub Copilot, developed by Microsoft subsidiary GitHub, was originally built on OpenAI's Codex model.

Having joined the Big Tech AI chorus, Microsoft is placing Copilot and generative AI front and center in its consumer-facing product portfolio. Surprisingly, Microsoft is also pushing its AI agenda in business- and enterprise-focused products, like Microsoft Azure, Entra, and Microsoft 365 for Enterprise. Considering enterprise customers are traditionally more conservative and adhere to slower adoption and upgrade cycles, Microsoft's aggressive AI push across all fronts seems a risky gambit.
As with other consumer-facing AI services, Microsoft has positioned Copilot as a "freemium" product, offering a free tier with limited features and a paid tier with more advanced ones. To bolster this agenda, Microsoft, following in Apple's footsteps, revealed plans to introduce ARM-powered Windows PCs with deep Copilot integration under the Copilot+ moniker.

Perhaps the most controversial feature of Microsoft's Copilot+ platform is Recall, a feature that Microsoft claimed would let you search everything on your PC using natural language, thereby removing traditional barriers to finding changes you've made to documents, edits to photos, and so on. It didn't take long, however, for Recall to be skewered by security researchers as a cybersecurity nightmare; relying on an unencrypted database of screenshots, Recall actually wasn't secure at all. Anyone with local access to the computer could easily exfiltrate this database of screenshots, containing untold troves of sensitive user data.

Privacy implications for individual users aside, more questions were raised about compliance and data security in corporate and government environments. There were too many unanswered questions about Recall's technical implementation: how easy it would be to disable, whether it would stay disabled once turned off, and how system administrators would manage it at scale. Microsoft said precious little about data security and user privacy until public backlash forced it to postpone Recall's release indefinitely.

Microsoft's obvious bungling of Recall's technical implementation and its initial retrenchment when faced with public criticism speak to more insidious, deeply ingrained problems at the company. While layoffs are common throughout the tech industry, Microsoft has often been at the fore when it comes to dismissing entire teams and divisions within the company.
After finalizing its Activision Blizzard deal in October 2023, Microsoft fired roughly 2,000 employees from its gaming division, or about 10% of all employees within the gaming unit. During the first nine months of 2023, Microsoft reduced its workforce by 16,000, outstripping the 10,000 layoffs it forecast at the beginning of the year. In reducing its gaming unit headcount, Microsoft shuttered multiple game studios, including Arkane Austin, Tango Gameworks, and Alpha Dog Games. Microsoft's treatment of its gaming division is only a microcosm of the wider video game industry's treatment of its own talent: in February 2024, Sony fired 900 employees from its PlayStation division, and Take-Two Interactive (parent company of Rockstar Games) announced plans to cut its workforce by 5% and end development on several games. None of this apparent dysfunction is really shocking, considering corporate acquisitions inevitably result in mass layoffs: roughly 30% of employees are deemed redundant when companies in the same industry merge. We can't hold Microsoft to a separate standard for post-merger practices, considering the fetish for layoffs is one that's shared throughout the Fortune 500.

On the other hand, in spite of Microsoft's massive war chest and its appetite for acquiring companies and intellectual property, its cybersecurity practices are in an apparent state of free fall. AJ Grotto, former White House cyber policy director, claims Microsoft is a "national security threat" due to its monopoly position within the industry, especially within the realm of government IT. In June 2023, Chinese government-backed agents carried out an attack on Microsoft Exchange Online, facilitated by Microsoft's lackadaisical security policies, leading the U.S. Cybersecurity and Infrastructure Security Agency to demand immediate "fundamental, security-focused reforms" at Microsoft.
On April 2, 2024, CISA issued an emergency directive calling for immediate remediation of a major security breach involving Russian state actors exfiltrating data from Microsoft email systems. CISA writes in its directive:

"The Russian state-sponsored cyber actor known as Midnight Blizzard has exfiltrated email correspondence between Federal Civilian Executive Branch (FCEB) agencies and Microsoft through a successful compromise of Microsoft corporate email accounts. The threat actor is using information initially exfiltrated from the corporate email systems, including authentication details shared between Microsoft customers and Microsoft by email, to gain, or attempt to gain, additional access to Microsoft customer systems. According to Microsoft, Midnight Blizzard has increased the volume of some aspects of the intrusion campaign, such as password sprays, by as much as 10-fold in February, compared to an already large volume seen in January 2024."

Microsoft's position on internal cybersecurity practices seemingly hasn't changed, in spite of CEO Satya Nadella's commentary on Microsoft's broken security culture. Nadella said, "If you're faced with the tradeoff between security and another priority, your answer is clear: Do security. In some cases, this will mean prioritizing security above other things we do, such as releasing new features or providing ongoing support for legacy systems."

Microsoft's various commitments to improved security across its customer-facing products seem more like nebulous promises, while its position on internal security falls somewhere between "scattershot" and "completely undefined". Microsoft's massive size, knowledge siloing, and responsibility to maintain huge parcels of legacy code likely all contribute to the brokenness of its internal and external cybersecurity practices.
An organization the size of Microsoft requires massive investments in cybersecurity, security-awareness training for teams across all units, and external audits to demonstrate security practices are actually being followed. So far, though, Microsoft's only incentive to improve security practices is the threat of losing market share to competitors; in government IT, which accounts for a significant portion of Microsoft's revenue, there is no competition. Meanwhile, the U.S. government has proven itself toothless in handing down reprimands that actually hurt serial security and privacy offenders; such fines are considered the cost of doing business for companies like Microsoft.

Let's consider, then, the likelihood of the following two scenarios: (1) a serious competitor to Microsoft appears within the next few years and forces Microsoft to change its security and privacy practices, lower prices, and listen to customer feedback; (2) U.S. regulatory agencies hand down multibillion-dollar fines that substantially damage the financials of companies like Microsoft when they fail to comply with industry regulations. Given the direction of our political institutions, including the Supreme Court, the odds of the U.S. government holding abusive monopolies to account seem poor. Likewise, the odds of a serious competitor to Microsoft emerging anytime soon are remote at best.

It seems obvious that we, as tech consumers, are arriving at a crossroads where we have to reconsider our relationships with companies like Microsoft. As more productivity software becomes web-based and the average person's need for processing power and storage declines, Microsoft's Windows hegemony appears precarious. Microsoft is cognizant of the changing dynamics of end-user computing, of course, which is why it is positioning itself as a services company rather than the boxed-software outfit it used to be.
The longer-term strategy at Microsoft may well be to convert Windows itself into a monthly or yearly subscription, if its efforts to monetize mined data from current Windows installations don't pay the dividends it wants.

Microsoft's whipsawing of Windows users on the issue of local user account creation in Windows 11 ties into the general enshittification of Microsoft products. Despite some changes in its stance on the matter of local accounts versus Microsoft accounts, Microsoft's long-term strategy with Windows 11 has been to discourage users from creating local user accounts when setting up Windows. While Microsoft accounts were previously optional, they are all but mandatory now; this mandate puts users who don't have or don't want a Microsoft account in a compromising position. More troublingly, Microsoft's decision to make local user accounts optional only in more expensive (or unavailable to the general public) versions of Windows raises more questions about the company's abuse of its monopoly position in the consumer computing space.

Previously available workarounds to avoid Microsoft's account dictate are slowly being stamped out, leaving users with fewer options to use a local Windows account. In a disturbing twist, this online account mandate means that if a user's computer doesn't ship with compatible networking drivers, the machine can't connect to the Internet during setup and a local account option is unavailable, leaving the user in a sort of purgatory until compatible drivers can be integrated into a custom Windows image, a task far outside the average user's technical capacity.

As things stand, a confluence of poor practices, anti-consumer policies, and monopoly abuse has put Microsoft in a position where governments and enterprises increasingly question its competency, and end users question the need for Windows at all.
Microsoft may not have meaningful competition in government and enterprise IT, but its behavior will hand its competitors all the rope they need to hang it. While Microsoft may envision a future in which it can double-dip by monetizing user data and converting its entire portfolio to monthly subscriptions (see: Adobe), it fails to properly heed the rising threats of Apple's macOS and Google's ChromeOS. In January 2013, Microsoft Windows held 91% of the desktop operating system market; by November 2023, that share had fallen to 72%. Over the same period, Linux's market share grew from less than 1% to over 4%, and ChromeOS (which is based on Linux) has become a juggernaut in educational settings.

Microsoft's insistence on ignoring the user experience, milking its government and enterprise clients for all they're worth, and antagonizing the federal government by failing to secure its own infrastructure is leading it, and us, down the road to oblivion. As enshittification within the tech space accelerates, we have to reconsider what our data security and privacy are worth. A false sense of convenience has led the average user to change the way they value ownership, security, and privacy while the stakes in cybersecurity have never been higher. The hyper-normalization of data theft, foreign espionage, and state-backed cyberattacks has led people to expect and accept piss-poor behavior from giant tech companies at a time when these companies should be held to higher standards rather than excused from any real liability.

If you do what you've always done, you get what you've always gotten. It's time to abandon bad platforms and reject bad policies, even if it is temporarily inconvenient. Watchdog groups are toothless, and the government certainly won't do it for you.


Other Pages (21)

  • Service Plan Information - Geeks for Business

    Geeks for Business offers managed IT plans for small and medium enterprises across a variety of industries. Learn more about our IT service plans today. A Managed IT Primer

  • General Plan Pricing - Geeks for Business

    Learn more about Geeks for Business Managed IT pricing.

  • Our Work - Geeks for Business

    Find out why Geeks for Business is so highly rated by checking out our project portfolio. The Geeks for Business Portfolio

    • Board-level repair of a MacBook Pro: performing board-level diagnosis of a MacBook Pro that doesn't turn on
    • Ubiquiti CloudKey controller: upgrading a business client's network from a 10-year-old Linksys router to a modern, cloud-manageable Ubiquiti network
    • Type 110 punchdown block: a 110 block at a business client's location; 110 blocks are modern versions of older 66 blocks and are used to distribute landline phone and Ethernet wiring through buildings
    • Network rack during upgrade: a business client's existing 19"-depth network rack during an upgrade performed by Geeks for Business; we are removing old equipment and installing a new 24-port Ethernet switch, a new router, a PDU (power distribution unit), and a patch panel
    • Patch panel: a business client's new Ethernet patch panel during an extensive network upgrade performed by Geeks for Business
    • Patch panel: additional wiring between the new patch panel and the customer's existing Ethernet switch
    • Plywood backing for structured wiring installation: installing plywood backing in a customer's basement, on masonry, to support a structured wiring cabinet from which all of the customer's low-voltage cabling will be distributed
    • Structured wiring cabinet: installing a structured wiring cabinet, with new Ethernet and coaxial cable distributed throughout the customer's home
    • Terminating Cat6 Ethernet cable: terminating Cat6 Ethernet cable with pass-through RJ-45 connectors
    • Structured wiring installation: with the structured wiring cabinet, patch panel, surge protector, outlets, and other items installed, we're making all the connections for Ethernet, phone, and coaxial
    • Customer's network rack: wire shelving used to store the customer's network-attached storage server, AT&T fiber modem, and other items
    • Structured wiring installation, completed: all necessary connections made between the switch and patch panel, cable tidying done
    • Audio-visual rack for a church: a church client requested new AV equipment for their service recordings and Zoom services during the pandemic; Geeks for Business installed cameras, mixers, microphones, a new workstation PC, and networking equipment
    • Workstation computer: a custom-built workstation PC by Geeks for Business
    • Diagnosing issues with outside telephone wiring: diagnostics on outdoor phone wiring to remedy dropped DSL Internet connections for a rural client
    • Network rack at a hospice care facility: Geeks for Business upgraded a local hospice care facility's network with new power distribution hardware, new switches, routers, and WiFi access points, and new network racks
    • Ubiquiti WiFi access point: a new wireless network installed for a retail client in Chapel Hill, NC
    • Structured wiring cabinet with patch panel: an older structured wiring cabinet from a 1980s home; Geek Housecalls rewired the cabinet, installing new Ethernet and coaxial cable, as well as a new router, switch, TV antenna, and TV antenna amplifier; the patch panel is mounted above the wiring alcove as the cabinet was not wide enough to accommodate it
    • New outlet and TV wall mount: Geek Housecalls installed a new electrical receptacle and a new wall mount for a large OLED television at a client's home
    • TV mounted on wall mount: the 65" TV mounted on the wall mount and connected to power and coax
    • Finished TV wall mount: the final product, a 65" OLED TV on an articulating wall mount



8801 Fast Park Drive Suite 301 | Raleigh, NC 27617 | (919) 381-8974 | (844) 949-4335 | support@geeksforbusiness.net
