Remote Work Security: Advanced Strategies Beyond VPNs https://ccbtechnology.com/remote-work-security-beyond-vpns/ Thu, 19 Dec 2024 21:49:13 +0000

The post Remote Work Security: Advanced Strategies Beyond VPNs appeared first on CCB Technology.

Today, remote work is a significant and much more common aspect of people’s lives. Currently, about 32.6 million Americans work remotely at least part of the time, which is more than the entire population of Texas! As the trend of working from home grows, so too do the challenges associated with securing sensitive information and maintaining productivity. While many companies turn to Virtual Private Networks (VPNs) as the first line of defense against cyber threats, relying solely on this technology can leave gaps in security.  

In an era marked by sophisticated cyber-attacks and data breaches, it’s crucial to explore additional layers of protection that go beyond traditional VPN solutions. This means implementing comprehensive security measures such as Zero Trust Architecture (ZTA), multifactor authentication (MFA), endpoint protection, and end-user awareness training. By adopting a proactive and holistic approach to remote work security, organizations can better safeguard their data and keep up with modern workforce requirements.

Embracing Zero Trust Architecture for Ultimate Protection 

With so many people taking their work outside the office, traditional perimeter-based security models are no longer enough. Zero Trust Architecture (ZTA), as its name would imply, is based on the premise – never trust, always verify – and offers a more sophisticated approach to security that reduces your network’s attack surface. It does this by removing the implicit trust granted to users and devices within a network. With ZTA, every access request is verified, regardless of whether it originates inside or outside the network.

Implementing ZTA involves continuous monitoring and validation of user identities and devices, adopting least-privilege access principles, and segmenting networks to minimize lateral movement. By assuming that threats could exist both inside and outside the network, organizations can better protect sensitive data and systems from potential breaches. 

Implementing Robust Endpoint Security Solutions 

Endpoints such as laptops, smartphones, and tablets are often the weakest links in remote work security. That’s why robust endpoint security solutions are essential to protect these devices from malware, phishing attacks, and other threats. Solutions should include antivirus software, firewalls, intrusion detection systems, and advanced threat protection features.  

Endpoint detection and response (EDR) tools play a crucial role here: they continuously monitor endpoint activities, analyze behavior patterns, and detect indicators of compromise (IOCs) associated with cyber threats. By integrating these solutions with centralized management platforms, organizations can maintain visibility and control over all remote devices, ensuring security and compliance with corporate policies.

EDR tools also enable rapid incident response, allowing IT teams to effectively investigate security incidents, contain threats, and remediate compromised endpoints. 

Comprehensive Employee Training Programs 

Even the most advanced security measures can be quickly undermined by human error. Comprehensive employee training programs are vital to educating workers about security best practices and the latest cyber threats and trends. Training should go beyond basic protocols to cover topics such as recognizing phishing attempts, using secure communication channels, and reporting suspicious activities. 

Regularly updating and reinforcing this training is crucial to building a culture of security within your organization and involving each employee in your overall business security. Interactive training sessions, simulations, and assessments can enhance engagement and retention, making employees an active part of the organization’s security strategy. 

If you’re hungry for more insights and tips on user awareness training, dive into our other blogs on the topic.

Utilizing Multifactor Authentication and Biometrics 

More than 99.9% of accounts that end up compromised do not have MFA enabled. This statistic alone should be enough reason to implement multifactor authentication (MFA) if you haven’t already. MFA adds an extra layer of security by requiring users to provide multiple forms of verification before accessing sensitive systems or data. These commonly include something you know (a password), something you have (a mobile device), and something you are (a biometric factor). Biometric authentication methods, such as fingerprint and facial recognition, offer a higher level of security and convenience.  

By implementing MFA, organizations can significantly reduce the risk of unauthorized access and enhance the overall security posture of their remote work environments. However, it’s important to consider MFA fatigue, where users feel overwhelmed or stressed by the frequent need to verify their identity whenever they attempt to log in. This concern can create friction in the user experience and potentially discourage users from utilizing MFA. To mitigate this, organizations should seek to balance security with usability, perhaps by implementing adaptive authentication methods that consider factors such as the user’s location or behavior patterns, reducing unnecessary prompts while maintaining robust security practices. 
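As a rough illustration, an adaptive-authentication policy ultimately boils down to a decision like the sketch below. The function name and the signals it takes (known device, usual location, resource sensitivity) are hypothetical inputs your identity platform would supply, not a specific product’s API:

```powershell
# Hypothetical sketch of an adaptive MFA decision. The signals
# (known device, usual location, resource sensitivity) are assumed
# inputs from your identity platform.
function Test-MfaPromptRequired {
    param(
        [bool]$KnownDevice,
        [bool]$UsualLocation,
        [bool]$SensitiveResource
    )
    # Always step up for sensitive resources; otherwise prompt only
    # when the device or location is unfamiliar.
    if ($SensitiveResource) { return $true }
    return -not ($KnownDevice -and $UsualLocation)
}

# A familiar device in a familiar location skips the extra prompt
Test-MfaPromptRequired -KnownDevice $true -UsualLocation $true -SensitiveResource $false  # returns False
```

The point of the sketch is that most sign-ins from recognized contexts never see a prompt, which is exactly what reduces MFA fatigue without weakening protection where it matters.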

Want more on MFA? Read this next: Dispelling the Myths of Multifactor Authentication 

Regular Audits and Continuous Monitoring 

“Set it and forget it” might work for your slow cooker, but it’s definitely not the mindset you want for your IT security. While there are some time-saving methods to keep things in check, regular security audits and continuous monitoring are key. They help you spot and tackle vulnerabilities as they come up, keeping your system safe in real-time. Be proactive instead of reactive! 

Audits help evaluate the effectiveness of security measures, ensure compliance with industry regulations, and identify potential weaknesses in the system, allowing for a more proactive approach. Continuous monitoring practices, such as security information and event management (SIEM) and intrusion detection systems (IDS), help organizations detect anomalies before they escalate into larger issues. By implementing a robust framework for both audits and continuous monitoring, organizations can proactively strengthen their defenses against attacks. 

Wrapping things up 

As remote work continues to evolve, so must our approaches to security. By integrating advanced strategies like Zero Trust Architecture, robust endpoint security, comprehensive employee training, and multifactor authentication, organizations can create a fortified environment that addresses modern threats.

We know navigating these complex security measures can be daunting. With our Security as a Service (SECaaS) offering, we take on the heavy lifting for your remote security planning, enabling you to focus on what you do best. Our team can help ensure that your organization not only meets today’s challenges but is also prepared for tomorrow’s uncertainties.  

Stay secure, adapt proactively, and empower your workforce with the right tools and support.
Contact us today to get started! 

Understanding the Importance of IT Refresh Cycles https://ccbtechnology.com/it-refresh-cycles/ Fri, 13 Dec 2024 18:23:29 +0000

The post Understanding the Importance of IT Refresh Cycles  appeared first on CCB Technology.

When was the last time you considered the age of your infrastructure? Do you have a running list of your end-user devices and their age? Despite our best efforts to maintain and support our systems, there comes a time when even the most carefully tended IT infrastructure can become outdated, dysfunctional, or simply unsupported.

In this blog, we’ll explore the importance of IT refresh cycles, why they are essential for keeping your organization agile and competitive, and how to plan effectively for the years ahead.

What are IT refresh cycles and why are they important?

IT refresh cycles involve the regular updating and upgrading of your organization’s hardware, software, and infrastructure to keep everything running smoothly and efficiently. Think of it like maintenance on a car. Just because you purchased a brand-new car doesn’t mean it will stay that way forever. A car requires routine oil changes, tire rotations, and inspections to prevent breakdowns and maintain efficiency. Your technology is the same – needing periodic updates and equipment replacements to avoid failures and keep operations up and running.

By following a well-planned refresh routine, companies can steer clear of potential unexpected expenses or downtime caused by outdated technology. Not only does this improve system performance and help maintain compliance with industry standards, but it also supports business continuity and growth.

The benefits of thinking beyond the one-year mark

While short-term planning is sometimes easier and less time-consuming, thinking beyond the one-year mark is vital for creating a refresh plan that anticipates future technological advancements and needs, giving organizations time to seamlessly plan and integrate them into their infrastructure.

Proactive planning also helps you stretch your IT budget further as refresh cycles allow more time for businesses to take advantage of bulk purchasing discounts, better financing options, and more favorable maintenance agreements. By planning for the replacement of expensive equipment or large-scale projects, organizations can allocate funds effectively and avoid extreme budget strain. This foresight allows organizations to assess their technology lifecycle, ensuring that equipment is replaced or upgraded at optimal times rather than being forced into hasty decisions due to failures or obsolescence.

By anticipating these major expenses and scheduling them into the budget over several years, businesses can maintain operational efficiency and stability. This proactive approach provides a competitive edge, as it positions the company to quickly adapt to changes in the market and technology landscape, allowing for smoother transitions and upgrades.

Assessing your current technology

If you’re embarking on IT refresh planning, the first major step is to assess your current technology. When beginning this assessment, consider the following items to create a comprehensive evaluation.

  • Inventory your assets: Your technology is like a toolbox. You need the right tools in place to do the job correctly and efficiently (aka, you wouldn’t use a wrench to drill a hole). Inventory your current hardware and software. Create a detailed list, noting the age, specifications, and condition of each item. Once completed, you should have a clear overview of what you have and any major gaps or issues.
  • Evaluate performance metrics: A good performance evaluation will help you understand your technology’s strengths and weaknesses. Use performance metrics such as speed, downtime, and user satisfaction to gauge how well your existing equipment meets current demands. Document these metrics so you can see any glaring trends over time.
  • Review maintenance records: Take time to review your maintenance logs and support tickets. This will help you identify any recurring issues or patterns and decide which systems require more attention or should be replaced altogether.
  • Analyze future needs: If you were taking a road trip, you’d want to consider your vehicle’s capabilities and the type of terrain you’d be navigating. Similarly, it’s important to consider your organization’s future needs. Evaluate projected growth, emerging technologies, and changes in business strategy to ensure your technology can support these goals.
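To make the first step concrete, here is a minimal inventory sketch for a single Windows machine using PowerShell’s built-in CIM cmdlets. It pulls the model, OS version, install date, and a rough age in years for refresh planning; extending it across a fleet (for example with PowerShell remoting) is left as an exercise:

```powershell
# Collect basic inventory data from the local Windows machine:
# model, OS version, install date, and approximate age in years.
$os = Get-CimInstance Win32_OperatingSystem
$cs = Get-CimInstance Win32_ComputerSystem

[PSCustomObject]@{
    Computer    = $cs.Name
    Model       = $cs.Model
    OSVersion   = $os.Version
    InstallDate = $os.InstallDate
    AgeYears    = [math]::Round(((Get-Date) - $os.InstallDate).TotalDays / 365.25, 1)
}
```

Run across every endpoint and exported to CSV, output like this becomes the detailed list described above, with age and specifications already captured.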

Developing a Refresh Strategy

Planning for your planning might sound unnecessary, but crafting a clear attack plan for your refresh cycles can really streamline the process. It sets benchmarks that keep you on track and prevent you from getting stuck.

Here are some key steps to get you started:

  • Set goals and define objectives: Clearly outline what you aim to achieve with your IT refresh. Are you looking to improve performance, reduce costs, enhance security, or support a specific business initiative? Make sure that your IT goals align with the overall business strategy. This could involve supporting new business capabilities, such as cloud migration or remote work enhancements.
  • Consider upcoming business changes: When planning a home renovation, it’s far more efficient to tackle any plumbing issues while the walls are down rather than after everything is finished. Similarly, if your business is considering a move, expansion, acquisition, or a shift in operational focus, this is a pivotal moment that often warrants a deeper examination of your technology. It presents an excellent opportunity to simultaneously plan significant IT projects or upgrades, ensuring a smoother transition and optimal performance in your new space or structure.
  • Establish a consistent assessment schedule: As mentioned earlier, assessing your technology is important. Set a timeline for regular assessments of your IT assets. Depending on your organizational needs, this could be quarterly, annually or somewhere in between.
  • Consult and involve key parties: Involve representatives from various departments (IT, finance, operations, etc.) to gather diverse insights and ensure broader needs are considered. Encourage open dialogue among parties to identify pain points and future requirements. This will help prioritize refresh initiatives based on comprehensive input.
  • Work within or establish governance structures: Integrate your refresh strategy into existing IT governance frameworks to ensure accountability and define roles and responsibilities for managing the refresh cycle. Consider how your refresh strategy impacts risk management, especially concerning data security and compliance. Ensure that upgrades mitigate risks and do not introduce vulnerabilities.
  • Develop a budgeting plan: Allocate specific budgetary resources for refresh cycles, helping to avoid surprises. Factor in both the direct costs of upgrades and the potential savings from improved efficiency. Use the assessments and team feedback to prioritize which technology assets are most critical for investment. Consider the return on investment (ROI) for each proposed upgrade.

Check out our blog about Strategic IT Project Planning, which covers many of these same steps in more detail.

Leverage partnerships and skip the headache

Navigating the complexities of IT refresh cycles can be daunting, but partnering with a managed service provider (MSP), like CCB Technology, can significantly alleviate this burden. We bring expertise, insights, and resources that can help organizations develop comprehensive and realistic refresh plans. With our in-depth understanding of your environment, we can identify necessary upgrades and pinpoint aging or unsupported technology before it becomes a liability.

By leveraging an IT partnership, businesses can ensure that their IT refresh cycles are not only well-planned and budgeted but also aligned with their overall IT strategy and goals. CCB Technology has helped thousands of organizations streamline their IT.

Let’s collaborate and make your IT refresh planning as seamless and effective as possible.

Looking for more? Check out our Ultimate Guide to IT Refresh Cycles

PowerShell Documentation Cmdlets – The Built-in Hidden Secrets https://ccbtechnology.com/powershell-cmdlets-built-in-hidden-secrets/ Fri, 30 Nov 2018 19:39:00 +0000

The post PowerShell Documentation Cmdlets – The Built-in Hidden Secrets appeared first on CCB Technology.

From automating simple but time-consuming tasks, to carrying out advanced functions in Microsoft 365 that aren’t available in the graphical interface, PowerShell has near limitless potential. Having a better understanding of how PowerShell works opens up a number of possibilities for how it can be used. In this blog, we will investigate some of the useful documentation features built directly into PowerShell and how they can assist in better understanding the cmdlets and objects that you are working with.

If you’re just getting started with PowerShell, first start with the basics – what it is, what it’s used for, and why it’s a formidable tool for administrators in today’s IT landscape.

Using Cmdlets

Cmdlets make up the core of how PowerShell is used. If you are not familiar with what they are, head over to our PowerShell Primer article to take a look. One commonly demonstrated Cmdlet is Get-Process. The Get-Process command gets the processes on a local or remote computer. Without parameters, this cmdlet gets all of the processes on the local computer. You can also specify a particular process by process name or process ID (PID) or pass a process object through the pipeline to this cmdlet.
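A few ways to call it (the process names and IDs will, of course, vary on your machine):

```powershell
# All processes on the local computer
Get-Process

# A specific process by name (wildcards are supported)
Get-Process -Name "explorer"

# A specific process by process ID
Get-Process -Id 1234
```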

PowerShell has an impressive way of helping users work more productively and remember the large number of Cmdlets contained within the system – Tab Complete. From a PowerShell console, simply start typing a command. After a few characters, you can press the Tab key to have PowerShell complete the Cmdlet for you.

Tab complete is a lifesaver when it comes to working with PowerShell, especially when what has been typed into the console is ambiguous and the options need to be cycled through. You can either continue pressing tab to move through the list or in newer versions of PowerShell, you can press Ctrl + Space to bring up a full list of available commands.

Two of my favorite Cmdlets

Combining one’s knowledge with what’s available using PowerShell’s built-in documentation can prove to be a powerful asset. Two of my favorite Cmdlets are Get-Help and Get-Member. In this next section, I’ll break down what these Cmdlets do and how I use them to support my workflow.

Get-Help

The description of what the Get-Process cmdlet does was mentioned above. That information can be obtained online or directly from the PowerShell console. Having the information directly available is one of the significant advantages of PowerShell compared to other scripting languages. Information about the cmdlets, the correct syntax, the list of parameters, and even examples can all be reached without leaving the console or involving an outside resource. There are, of course, lots of great resources online with detailed examples and explanations – but those aren’t always accessible on a machine that has no internet connection or, even more constraining, no graphical user interface (GUI).

To view the help documentation for the Get-Process cmdlet, run:
Get-Help Get-Process

Note that the first time you run Get-Help, you may be prompted to download updated help files. This requires an active internet connection and will take some time, as it pulls the latest help information down to your machine. If you are not able to run the update, you can still view the help information – it just may not be the latest version, which in most cases is still enough to work through running the command.
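If you skipped that prompt, the local help files can be refreshed later with a single cmdlet:

```powershell
# Download the newest help content for installed modules.
# Run from an elevated session if modules are installed machine-wide.
Update-Help
```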

Let’s break down this output.

The first two sections provide the name of the cmdlet and a synopsis of what the command does. A detailed description can be found just below the syntax. The syntax section explains how the cmdlet can be run. It shows the parameters that can be passed to the cmdlet, what type of objects they need to be, and whether or not a parameter is required. More information on PowerShell syntax is available in Microsoft’s online documentation.

In the remarks section, there are a few additional parameters you can pass to the Get-Help Get-Process command to view more information. Most notable is the -examples switch parameter. This provides a list of examples for how the command works. A few things for you to try out:
Get-Help Get-Process -examples
Get-Help Get-Process -detailed

This provides you with a more detailed version of the information shown above.

And you can even run Get-Help against itself to view all the ways that you can discover information about how PowerShell works. So, if you’re in the mood for some “light lunchtime reading” check out Get-Help Get-Help -full for some riveting information (ok maybe that’s just an engineer thing).

Get-Member

Get-Member is one of the other commands that I often use when working in PowerShell. PowerShell is an object-oriented language. This means that we can reference different parts or attributes of an object as we work in the console or with a script. An object also contains different methods (or functions) that it can perform. The Get-Member command provides a view of what the object looks like. Try it yourself with the Get-Process command:

Get-Process | Get-Member

For those that aren’t aware, the vertical bar character in the middle is called a pipe. On most US keyboards, it’s located above the Enter key.

When you run Get-Member, you are looking at the members – the properties and methods – of an object. In the example above, you are looking at the parts of the Process object (System.Diagnostics.Process). The output of the above command allows you to see what you can do with the process object, how you can interact with it, and its other attributes. Some of the methods worth mentioning are Start (which will start a system process) and Close (which does what its name implies). The complete output is a bit long to include here, but feel free to run the above command and see for yourself.
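Because that output is long, it helps to know that Get-Member can filter what it shows:

```powershell
# Show only the methods of the Process object
Get-Process | Get-Member -MemberType Method

# Show only the properties
Get-Process | Get-Member -MemberType Property
```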

By using a combination of Get-Help and Get-Member, you can get a better understanding of how cmdlets interact with each other and with objects. In a future article, we’ll start harnessing the capabilities of PowerShell and see how it can save you time and effort when managing an environment.
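For example, once Get-Member reveals that Process objects expose a WorkingSet64 property (the process’s memory in bytes), you can put that discovery straight to work:

```powershell
# Confirm the property exists on the object
Get-Process | Get-Member -Name WorkingSet64

# Then use it: the five processes using the most memory
Get-Process |
    Sort-Object WorkingSet64 -Descending |
    Select-Object -First 5 Name, WorkingSet64
```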

Microsoft Azure Migration: Getting Ready to Move to the Cloud https://ccbtechnology.com/microsoft-azure-migration/ Fri, 02 Nov 2018 18:06:41 +0000

The post Microsoft Azure Migration: Getting Ready to Move to the Cloud appeared first on CCB Technology.

My son received a new Lego set for his birthday and was excited to build the image shown on the box. He wasn’t necessarily aware of it, but he had to strategize before starting. He needed to consider where and when to build it and then create a process for assembling it.

A Microsoft Azure migration does not have to be difficult, but it does require careful planning to be successful while limiting the impact on business. What parts of the network will be migrated? What’s the timing? Will it be done all at once or will there be multiple phases? Then, what is the process you will follow to accomplish the migration?

As with Lego sets, you need to follow a process. A strategic plan is required so that each piece of the migration fits together into an optimal solution. Businesses need to set clear priorities, decide the migration order, and determine the necessary resources.

Pre-Migration Considerations

While the concept of a Microsoft Azure migration is intriguing, there are considerations to be aware of before the planning phase begins.

  • Compliance requirements can throw up a roadblock if you have sensitive data that needs to be migrated.
  • Proprietary technology, which some businesses use, may not be able to be deployed to Azure for legal reasons.
  • Platform-specific applications can be hindered by platform lock-in, making it challenging to move between platforms.
  • Insufficient bandwidth is an often-overlooked consideration that can result in frustration and lost productivity. As part of your assessment, perform an analysis of all network traffic to create a baseline that will help determine the amount of bandwidth needed to meet demand.
  • System downtime is inevitable in the migration process, so plan accordingly by carefully estimating how much downtime each step of the migration may require.
  • Application and system compatibility can be an issue when running older versions of software. The key here is testing: create a test environment and document as you test.
  • Research management systems to determine the best option for your situation so it’s ready to go before you migrate.
  • Analyze security requirements. Although the cloud is most often more secure than a traditional infrastructure, you may have additional security needs. Here are insights into what Microsoft has in place for Azure security.

Assessing On-Premise Resources

To build the Lego set correctly, you need to inventory what’s in the box and read the instructions to have an idea of how everything fits together. A Microsoft Azure migration starts with a solid knowledge of your infrastructure and how all the parts work together. Performing a comprehensive infrastructure assessment is the best place to start.

An assessment will:

  • Assess on-premises servers and applications: Understanding current server configurations, how they’re being used and what type they are, helps to determine the current capacity.
  • Identify application and server dependencies: It is critical to understand which servers are supporting which applications and how they affect each other.
  • Analyze configurations: This will determine which workloads will migrate without modifications and which won’t. It will also provide guidelines to remediate potential issues and identify possible configuration changes.
  • Plan costs: Now that you’ve collected resource data, such as memory, storage, and CPU usage, it’s time to estimate your costs using the Azure Calculator.
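The resource data mentioned in the last step can be sampled quickly from a Windows server – a sketch like this gathers the kind of point-in-time numbers the Azure pricing calculator asks for (for real planning, collect samples over time rather than a single reading):

```powershell
# Point-in-time CPU load and memory usage, for cost estimation.
# Win32_OperatingSystem reports memory in kilobytes, so dividing
# by 1MB (1,048,576) converts the figure to gigabytes.
$cpu = (Get-CimInstance Win32_Processor |
    Measure-Object -Property LoadPercentage -Average).Average
$os = Get-CimInstance Win32_OperatingSystem
$usedGB = [math]::Round(
    ($os.TotalVisibleMemorySize - $os.FreePhysicalMemory) / 1MB, 1)

"Average CPU load: $cpu%  Memory in use: $usedGB GB"
```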

Building a Proof of Concept

Creating a proof of concept is an excellent idea before migrating the entire infrastructure to Azure. You can’t anticipate all possible issues during a proof of concept, but you can get a better understanding of the challenges you may face.

Microsoft offers a 30-day account that allows for $200 in credit to explore Azure’s capabilities, plus over 25 services that are free. You can deploy and test, and get insight into the capabilities available.

If you need help in planning how to set up your test environment, give CCB a call. We can connect you with a sales engineer to talk through what your objectives and goals are before you start.

Microsoft Azure Migration Approaches

The four most common strategies for migrating to Azure are:

  • Rehost: Often referred to as a “lift and shift” migration, this allows existing applications to migrate to Azure quickly by substituting cloud infrastructure for your own, with no modifications to your architecture. The downside is that this type of migration doesn’t take advantage of the elasticity of the cloud platform, which diminishes your cost savings.
  • Refactor: This cloud migration approach requires small application code changes that allow you to benefit from auto-scaling. This approach can save money over time by using only the resources needed at a given moment.
  • Rearchitect: Some applications may require more extensive modification of the code to benefit from running in the cloud. Rearchitecting takes time and is more expensive initially but will save money over time.
  • Replace: Some applications are just too old and monolithic to make migrating them to the cloud worthwhile. Consider SaaS (software-as-a-service) alternatives designed for the cloud.

Little did my son realize all the considerations and planning needed to successfully build his Lego set – nor did the logistics overshadow the excitement of what he was going to accomplish. He was thrilled by what he created in the end.

Careful planning, testing, and execution are a necessary part of the process, but enjoy the journey, knowing that the result will be a well-performing and cost-saving Microsoft Azure migration.

Ready to take the next step in your Microsoft Azure migration? From assessing your resources to choosing a migration approach and getting started, CCB can help.

Microsoft Azure vs Traditional Infrastructure https://ccbtechnology.com/microsoft-azure-vs-traditional-infrastructure/ Tue, 23 Oct 2018 21:03:26 +0000

The post Microsoft Azure vs Traditional Infrastructure appeared first on CCB Technology.

The question of Azure vs traditional infrastructure first came up at a former employer’s company meeting, where the CIO shared his vision for the upcoming year. It included keeping the central core of the infrastructure (servers, storage arrays, switches, etc.) on-site in a central location, then adding a remote office infrastructure that would contain only the equipment needed to operate – for example, switches, firewalls, and miscellaneous vendor-provided devices.

As I listened to him share his vision, I was intrigued by the concept since we were considering opening a new remote office. There were questions that I struggled with though… Was it possible to not have servers on-site? What happens if the connection between the central and remote locations goes down? When that happens, how are business-critical resources accessed? I put Azure vs traditional infrastructure side by side to find out. 

Traditional Infrastructure

Traditional infrastructures offer a sense of control and security over relevant business data, applications, and infrastructure, and that control is why many stay with a traditional infrastructure instead of moving to a cloud-based platform. Owning the physical equipment and software and having it on-premises allows control of physical access and, if implemented well, can yield many benefits.

However, some limitations hinder their potential:

  • Traditional infrastructures can be complex and rigid, making it hard to adapt to changing business situations.
  • Traditional infrastructures require comprehensive planning from the start to prevent ad hoc growth that can jeopardize business goals.
  • Traditional infrastructures can be challenging to scale to meet changing business requirements outside of virtualization, which itself requires in-depth knowledge of the virtualization platform and the physical hardware supporting it.
  • One of the biggest limitations of traditional infrastructure is that businesses must continually purchase updated hardware and software.

Introducing Microsoft Azure 

Today, we can take the whole concept of centralizing the core servers, etc., one step further and place the core infrastructure in Microsoft Azure, keeping only switches and vendor-provided devices such as modems and firewalls onsite. All critical infrastructure services like Active Directory, print and file servers, business-critical applications and more would move into the cloud utilizing one or several of the cloud solutions models.

Just as virtualization transformed the scalability and efficiency of traditional on-premises infrastructure and reduced the overall total cost of ownership (TCO), cloud providers have changed how IT professionals strategize when planning their networks.

Microsoft joined Amazon (AWS) in the cloud by creating the Azure platform, first as an internal initiative codenamed Project Red Dog, then released to developers in 2008 at Microsoft’s Professional Developers Conference. Today it is one of the world’s leading enterprise cloud providers, used by an estimated 90% of Fortune 500 companies.

Microsoft offers an extensive portfolio of cloud services ranging from compute to storage to IoT and more. The Azure platform provides businesses with the following benefits over traditional infrastructures.

  • Elasticity and Resilience: Traditional infrastructures are susceptible to downtime, have limited capacity and cannot guarantee a consistently high level of server performance. Azure excels at providing elasticity and resilience, enabling you to build a structure that can add or reduce compute power or storage as needed.
  • Flexibility and Scalability: There are two ways to scale a traditional infrastructure: purchase physical hardware or virtualize. Azure, by contrast, provides the ability to quickly build, deploy and manage applications or systems as best serves the business.
  • Deployment: Adding new servers and/or applications within a traditional infrastructure requires IT staff to take time to procure new hardware/software, set it up, then test and implement it. In Azure, businesses can deploy mission-critical applications often without upfront costs and with minimal provisioning time, allowing IT staff to focus on more pressing activities and objectives.
  • Reliability: For reliability, traditional infrastructures need redundancy – dual firewalls, ISP providers, power sources, etc. – which gets expensive in both time and money. With Azure, Microsoft provides the hardware and dedicated teams for implementation and maintenance. They’ve built in redundancy, from failover hardware to datacenters located worldwide.
  • Automation: A conventional infrastructure requires in-house IT personnel to monitor all systems and handle day-to-day duties like patch management and maintaining threat protection. In Azure, this is all handled by Microsoft, ensuring the infrastructure continues to run smoothly and that required security measures are in place.
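To make the deployment benefit concrete, here is a hedged sketch using the Az PowerShell module (the resource group and VM names are made up, and it assumes you have already signed in with Connect-AzAccount). It shows how a new server can be provisioned in minutes rather than after weeks of hardware procurement:

```powershell
# Create a resource group, then provision a small Windows Server VM in it.
New-AzResourceGroup -Name 'rg-demo' -Location 'EastUS'

New-AzVM -ResourceGroupName 'rg-demo' `
         -Name 'vm-demo01' `
         -Image 'Win2019Datacenter' `
         -Size 'Standard_B2s' `
         -Credential (Get-Credential)   # prompts for the local admin account
```

The same two commands can be torn down just as quickly with Remove-AzResourceGroup, which is part of what makes experimenting in Azure so much cheaper than experimenting with physical hardware.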

You can find a full list of Azure services on Microsoft’s site. 

Azure vs Traditional Infrastructure

The advantages of moving part or all of your company’s infrastructure to the cloud include increased flexibility, scalability, ease of management and cost savings. Successful infrastructure migrations to Azure require a lot of planning. If you’re looking to move to Azure, I outline all of the things you need to consider in this article. If you’d like to learn more or want help getting started, our engineers can help you throughout the process.

The post Microsoft Azure vs Traditional Infrastructure appeared first on CCB Technology.

]]>
PowerShell: What is it & what can you do with it https://ccbtechnology.com/what-is-powershell/ Thu, 05 Jul 2018 15:58:47 +0000 https://ccbtechnology.com/?p=142240 In today’s age there are a number of ways that one can interact with and manage computer systems, ranging from standard methods like the ubiquitous […]

The post PowerShell: What is it & what can you do with it appeared first on CCB Technology.

]]>
In today’s age there are a number of ways that one can interact with and manage computer systems, ranging from standard methods like the ubiquitous graphical user interface (GUI), to command line interfaces (CLI) that some might see as a step backward to the age of terminals and green screens. These are further supplemented by additional methods like application programming interface (API) calls and web-based management interfaces.

In order to understand why there has been a shift back toward the command line, one must first understand some of the basic necessities of administering computer systems on a large scale. The ability to complete repetitive tasks quickly and accurately is crucial when managing a large number of systems. Furthermore, the capability to ensure that these tasks are done in the same manner each and every time becomes paramount, as it ensures that the intended results are attained.

To meet these needs, a common CLI method used today is Microsoft Windows PowerShell. Find out the basics of PowerShell, how it can be used and why it’s becoming more popular for system administrators.

PowerShell… so what is it?

PowerShell is Microsoft’s scripting and automation platform. It is both a scripting language and an interactive command environment built on the .NET Framework. To better understand what PowerShell is, it helps to understand how it’s used. One of the authoritative resources on the subject, Ed Wilson, defines PowerShell as the following:

“Windows PowerShell is an interactive object-oriented command environment with scripting language features that utilizes small programs called cmdlets to simplify configuration, administration, and management of heterogeneous environments in both standalone and networked topologies by utilizing standards-based remoting protocols.”

There’s a lot to that definition so let’s unpack that a little more.

What is object-oriented?

An object-oriented language can be defined as a form of logic – it’s a way to understand how the platform or language behaves. An object is something that has one or more attributes and one or more methods or functions. Here are some examples:

Think of a television remote control. Its attributes are the size, shape, color, number of buttons, and other things of that nature. Its functions include turning the television on and off and adjusting the volume.

A car is another good example. Its attributes are things like its current speed, license plate number or location. Its methods are moving, parking, accelerating or slowing down.

A final example of an object is a dog. Its attributes are mood, color, breed, and energy level and methods are playing, sleeping, barking or chasing a tail.
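These examples map directly onto how PowerShell represents objects: attributes become properties and behaviors become methods. A minimal sketch (the dog object here is purely illustrative):

```powershell
# A hypothetical "dog" object: attributes become properties,
# behaviors become methods.
$dog = [PSCustomObject]@{
    Breed       = 'Labrador'
    Color       = 'Black'
    Mood        = 'Happy'
    EnergyLevel = 'High'
}
$dog | Add-Member -MemberType ScriptMethod -Name Bark -Value { 'Woof!' }

$dog.Color    # read an attribute (property)
$dog.Bark()   # invoke a method
```

Every value PowerShell works with – a process, a file, a service – is an object like this, which is why commands can be chained together so naturally.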

What are Cmdlets (or Command-lets)?

PowerShell is made up of a collection of commands that carry out particular functions or tasks. Behind each cmdlet, a number of things are happening: the command being executed works with classes, methods, multiple objects, possible API calls and many other things in order to carry out its job. The advantage of PowerShell is that you don’t have to understand all of these backend principles in depth, since the cmdlets take care of those processes.

To assist in the use of cmdlets, PowerShell follows a verb-noun naming pattern to help users understand the purpose of the commands. Example verbs include New, Set, Get, Add and Copy. Microsoft has a documented list of approved verbs and their intended uses to help maintain consistency throughout the platform. When placed together with nouns, you get cmdlets such as:

Get-Help
Get-Process
Get-Member
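The verb-noun pattern also makes PowerShell discoverable from within itself. For example, in any PowerShell session:

```powershell
# Find cmdlets by verb and noun:
Get-Command -Verb Get -Noun Process

# Read the built-in help and usage examples for a cmdlet:
Get-Help Get-Process -Examples

# Inspect the properties and methods of the objects a cmdlet emits:
Get-Process | Get-Member
```

Get-Command, Get-Help and Get-Member are often called the "big three" for learning PowerShell, because together they let you find a command, learn how to run it, and see what it returns.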

How is PowerShell used?

PowerShell has many uses and often is only limited by one’s creativity. As mentioned earlier, PowerShell functions both as an interactive language as well as a scripting tool. Both use cases allow for easier administration of systems as well as a great deal of flexibility for IT professionals.

When being used as a CLI to interact directly with a system, one of the major benefits of PowerShell is the ability to remotely connect to another system. An administrator can use a remote PowerShell session to connect to a server that’s not in the same physical location and run commands as if he or she were working directly on that server. A broad range of administrative tasks can be done remotely, saving IT professionals hours of time.

It also allows administrators to run the same commands against multiple servers at the same time, providing further time-saving benefits.
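As a sketch of both scenarios (the server names are hypothetical, and PowerShell Remoting over WinRM is assumed to be enabled on the targets):

```powershell
$servers = 'SRV01', 'SRV02', 'SRV03'

# Run the same command on every server at once:
Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-Service -Name 'Spooler' | Select-Object Name, Status
}

# Or work interactively on a single remote server:
Enter-PSSession -ComputerName 'SRV01'
```

Invoke-Command fans the script block out to all of the listed computers in parallel and returns the results to your session, which is where the real time savings come from.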

When it comes to creating PowerShell scripts, the ability to perform consistent tasks and steps repeatedly is a huge benefit for IT administrators. PowerShell automates many tasks, from the complete rollout of a new server in a virtual environment to the configuration of new mailboxes in Microsoft 365, and a host of additional functions in between.

In their simplest form, PowerShell scripts are a collection of PowerShell commands. This makes the transition from working with individual commands in the CLI to a fully automated script straightforward.
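For example, a handful of interactive commands can be wrapped into a small reusable script. This sketch (the parameter and its default are illustrative, and it relies on the -ComputerName parameter available in Windows PowerShell 5.1) reports services that should be running but are not:

```powershell
# Report-StoppedServices.ps1 - list services set to start automatically
# that are not currently running, on one or more computers.
param(
    [string[]]$ComputerName = @($env:COMPUTERNAME)
)

foreach ($computer in $ComputerName) {
    Get-Service -ComputerName $computer |
        Where-Object { $_.StartType -eq 'Automatic' -and $_.Status -ne 'Running' } |
        Select-Object @{ n = 'Computer'; e = { $computer } }, Name, Status
}
```

The body is exactly what you would type at the prompt; the param block is the only addition needed to turn it into a script you can run against any list of machines.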

What can you do with PowerShell?

Now that you have some basic knowledge of what PowerShell is and how it’s used, let’s explore what you can do with it.

First, it’s important to note that PowerShell is not going away. Despite the move from the olden days of green screens and the CLI to graphical user interfaces for almost everything we do, there is a trend toward moving things back to the CLI. There are many reasons for this, but one centers around the development lifecycle.

GUIs are usually a wrapper that ultimately runs code or commands on the backend when an action occurs, like clicking a button. This means the underlying code still needs to be written for the GUI to function. By cutting out the graphical piece and just using the PowerShell code, companies can roll out changes and updates more quickly, without having to worry about updating and testing a GUI in addition to the code, which is often time consuming.

PowerShell is tightly integrated into almost all of Microsoft’s products. In fact, there are certain actions in popular products like Microsoft 365 and Server 2016 that cannot be done with a GUI and can only be done with PowerShell. Along with being 100% necessary for certain tasks, the ability to automate with PowerShell makes understanding it a worthwhile skill for many IT professionals.

Second, once you start understanding all that can be done with PowerShell, it opens a whole new set of capabilities. From basic automation, to advanced scripting, PowerShell can provide an abundance of opportunities for simplifying tasks and saving time.

In future entries we will look at in-depth uses of PowerShell scripting and how it can be used to simplify many areas of an IT environment including: server configuration and deployment, user creation and auditing and administrative tasks in M365. In the meantime, these resources are a great launching point for learning more about PowerShell.

PowerShell Documentation:
https://docs.microsoft.com/en-us/powershell/

PowerShell Scripts:
https://github.com/powershell
https://www.powershellgallery.com/

PowerShell Blogs:
https://blogs.technet.microsoft.com/heyscriptingguy/
https://kevinmarquette.github.io/
https://www.planetpowershell.com/

Continue reading about PowerShell 

The post PowerShell: What is it & what can you do with it appeared first on CCB Technology.

]]>
How I Learned to Love the Cloud & Why You Should https://ccbtechnology.com/how-i-learned-to-love-the-cloud/ Thu, 24 May 2018 15:54:31 +0000 https://ccbtechnology.com/?p=140156 The demand to improve collaboration, customer experience, and the rate of product development is driving growth and acceptance of cloud solutions in business like never […]

The post How I Learned to Love the Cloud & Why You Should appeared first on CCB Technology.

]]>
The demand to improve collaboration, customer experience, and the rate of product development is driving growth and acceptance of cloud solutions in business like never before. IT professionals were slow to embrace early cloud platforms due to concerns about security, data ownership and reliability as well as being heavily invested in on-premise solutions.

I, too, had the same apprehensions and experienced firsthand the mixed emotions of considering a cloud-based solution. I was unsure about where our data would reside or who might have access to it. There was something falsely reassuring in thinking that if it was on-premise, it had to be secure!

Finally, like so many others, after years of staying on the ground with our infrastructure, we started to explore the benefits of cloud solutions and the increased efficiencies they could provide our business. A big factor that caused us to take a serious look at the cloud was realizing that if a disaster were to hit the office, all of our critical applications would still be functional with the right cloud solution in place.

As the number of cloud-based solutions grows every day, researching and comparing solutions can be time consuming, and in the IT world, time is a sacred commodity. Although I can’t make your final platform selections for you, I hope to provide you with insight into the advantages of cloud over on-prem options and the types of cloud solutions available to give you a foundation to work from.

Cloud vs. On-premise Infrastructures

Cloud solutions provide major benefits to both businesses and individuals over traditional infrastructure, including:

  • Accessibility: With cloud solutions, users can access data anywhere on any device, providing collaboration across all aspects of a business. Writing this blog is a perfect example – I have my Microsoft Word file open on an iPad, MacBook Pro and Surface Book. All show progress in real time as I write, whereas without a Microsoft 365 subscription, I would have three different versions of my document between the devices.
  • Cost Control: The cloud helps control costs through predictable subscriptions for enterprise-class infrastructure solutions, eliminating the heavy capital expenditure of an on-premise infrastructure. Additionally, pay-as-you-go models for some solutions mean you can add or subtract individual services and only pay for what you use.
  • Scalability: Cloud solutions provide easy and often instantaneous scalability versus the cumbersome process of procuring hardware and software for a traditional infrastructure, which can take weeks or longer.
  • Deployment: With cloud solutions, businesses can deploy mission-critical applications without any upfront costs and with minimal provisioning time, allowing IT staff to focus on more pressing activities and objectives. It can also help to reduce the time needed to get new applications and services to market.
  • Reliability: Though a concern for most during the introduction of the cloud, technology advancements are making cloud solutions even more reliable and consistent than on-premise IT infrastructures. Most providers today offer Service Level Agreements (SLAs) guaranteeing uptime of 95% or higher and 24/7/365 availability.
  • Security: This is a central focus for any cloud provider. Cloud solutions today provide greater security than their on-prem counterparts because data is stored across multiple highly secured locations yet can be accessed no matter what happens. A very simple example is a user who loses a laptop. If it’s managed using a cloud solution, the company can remotely wipe any sensitive information and protect its most important asset – its data.

Cloud Solution Models

Now that you know some key benefits of cloud infrastructures, let’s look at the three types of cloud solutions to find out what’s right for you:

Software as a Service (SaaS)

SaaS replaces traditional IT applications with a cloud model in which software is provided by subscription from a third-party vendor. It is the most common cloud solution used by businesses because of benefits such as eliminating software updates, centralized management, and access from any device over the internet.

Examples of these services include Microsoft 365, Trend Micro Worry-Free Business Services, WatchGuard Cloud, Dropbox and Salesforce. When I was just entering the cloud, these were solutions that greatly reduced the time and energy I was spending on installing and upgrading software.

Infrastructure as a Service (IaaS)

IaaS provides the foundation for cloud IT infrastructures. This model is all about IT operations and typically provides access to network features, data storage space, and computers, while allowing the highest level of management control and flexibility.

Google Cloud, Amazon Web Services (AWS), and Microsoft Azure are leading third-party IaaS providers, offering the ability to pay only for what you use – essentially renting the resources. That means if you’re coming into a slower season of business, you can power down three of your five web servers and pay accordingly.
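The "rent what you use" idea can be as simple as deallocating virtual machines when demand drops. A hypothetical sketch using the Az PowerShell module (the VM and resource group names are invented):

```powershell
# Deallocate three of five web servers for the slow season.
# A deallocated VM stops accruing compute charges (its storage is still billed).
$toStop = 'web03', 'web04', 'web05'
foreach ($vm in $toStop) {
    Stop-AzVM -ResourceGroupName 'rg-web' -Name $vm -Force
}
```

When the busy season returns, the same loop with Start-AzVM brings the capacity back in minutes, with no hardware sitting idle in between.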

Platform as a Service (PaaS)

PaaS allows organizations to build, run and manage customized applications without the need or worry associated with on-premise infrastructures, making it easier for developers to create efficiencies as a part of the application development process. Among the benefits of PaaS are a reduction in overhead and an increase in the speed of development and deployment. Microsoft Azure would fall into this category as well.

Here’s a good resource if you want a deeper dive into the types of cloud models.

Know What to Look For in a Cloud Solution

When considering a cloud solution for your organization, these are important factors you should know:

Service Level Agreements (SLAs)

SLAs outline the service expectations and responsibilities between your company and a cloud supplier. They should state the metrics used for measurement and any penalties if the services don’t meet those expectations. These agreements are for both parties’ protection and are necessary to build a successful relationship.

High availability is important and is expressed as a percentage of uptime in a given year, or the “number of nines”. (Note that maintenance windows for patching, deploying new systems, etc. are not considered downtime.) I have seen a few providers that promise “five nines”, equating to downtime of only 5 minutes and 15 seconds per year, and fail to meet that goal. You will need to decide what is acceptable for your organization to continue to operate. Discuss this with a potential provider and then get it in writing.
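The “number of nines” translates directly into allowed downtime per year, and a quick calculation shows why the difference between guarantees matters:

```powershell
# Convert SLA uptime percentages into allowed downtime per year.
$slaPercents = 99.0, 99.9, 99.99, 99.999
$minutesPerYear = 365.25 * 24 * 60
foreach ($sla in $slaPercents) {
    $downtime = (1 - $sla / 100) * $minutesPerYear
    '{0}% uptime -> ~{1:N1} minutes of downtime per year' -f $sla, $downtime
}
# "Five nines" (99.999%) works out to roughly 5.3 minutes per year,
# while "two nines" (99%) allows over 5,000 minutes – about 3.6 days.
```

Running the numbers yourself like this is a useful sanity check before signing: a provider offering “99%” sounds impressive until you see it permits multiple days of outage per year.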

Redundancy

Redundancy (or high availability) in cloud computing means that multiple copies of your data, or standby systems, exist and can be accessed if your cloud solution fails. When talking with a cloud solution provider about their redundancy or disaster recovery plan, make sure to ask:

  • How redundant are your data centers regarding power, ISPs and other resources?
  • What happens when the server hosting solution ‘A’ fails or goes down?
  • How are the backend systems set up (i.e.: web or SQL servers)? Are they clustered?
  • What happens if the site where the solution is hosted goes down? Will the cloud solution still be available?
  • What automation is in place to make sure my systems remain operational when a disaster happens at a primary site?

Hidden Costs

Even when a vendor provides a quote, there may be hidden costs you should look out for. I learned this the hard way in my previous role when I received the first invoice for our new cloud solution – something that was hard to explain to the executive team! Learn from my mistake and be sure to review the quote’s fine print carefully and ask thorough questions.

How I Learned to Love the Cloud

When I was an IT manager at my previous company, the data we worked with daily was very sensitive (containing PII) and securing it was the highest priority for me, my team, and the success of our company. Part of my hesitation to move to the cloud was due to the data breaches I seemed to be hearing about in the news every other day. It seemed impossible to decide which pieces we could migrate without compromising our security.

As the business grew, my direction and mindset needed to change regarding cloud solutions. We needed to be able to scale rapidly, collaborate efficiently and have access anywhere at any time. We finally chose to migrate to Microsoft 365 from an on-prem Exchange environment. I was initially blinded to its benefits because of my security fears and wanted to retain total control like I could with our on-premise servers.

However, as we began using Microsoft 365, I quickly started to see the organizational benefits: teams collaborating efficiently in groups, simultaneous sharing and editing of documents, and meetings that no longer required being in the same building. It’s as if my eyes had been opened to a whole new world.

Yes, we still needed to protect sensitive data, but the cloud allowed us to quickly expand our resources at a much lower cost than what we could implement in our on-premise infrastructure. Most importantly, it allowed me to sleep better at night knowing if a disaster hit, we’d still be functional for our clients. I learned to keep my feet on the ground and love the cloud.

QUESTIONS ABOUT THE CLOUD?

Want to know how your business can benefit from cloud solutions? We’d love to discuss your needs and help you roadmap your migration. Let us help.

The post How I Learned to Love the Cloud & Why You Should appeared first on CCB Technology.

]]>