Channel: TechNet Blogs

Guidelines for Publishing to the Cortana Analytics Gallery


In a previous blog post, we talked about how the Cortana Analytics Gallery offers our community of data scientists and developers a place where they can discover and share advanced analytics solutions created using the Cortana Analytics Suite.

The Gallery continues to grow steadily and is now home to more than 886 published entities. So why publish to the Gallery? For one thing, it's a great way to share your work with colleagues and the wider community, and to contribute to a public body of work built on the tools in the Cortana Analytics Suite. Also, as others in the community build on your solutions, their feedback and comments can help you learn and grow your skills, and popular contributions can even help you establish an online reputation and following.

Contributing to the Gallery is easy. The guidelines and tips below will help you through the relatively simple and short publishing process. You can also go to this collection, where we have additional details and examples of good Gallery content. We also have FAQs here.

Suggestions for Publishing and for Quality Documentation

  • While you can assume that the reader has prior data science experience, it still helps to simplify your language and explain things in detail wherever possible.

  • Not all readers will be familiar with the Cortana Analytics Suite, given that it is relatively new; therefore, provide enough information and step-by-step explanations to help such readers navigate through your work.

  • Visuals including experiment graphs or screenshots of data can be very helpful for readers to interpret and use your content the right way. See this collection for more information on how to include images in your documentation.

  • If your dataset is included in your experiment rather than imported through a reader module, it will be published to the Gallery along with the experiment. Therefore, ensure that any dataset you publish has licensing terms that permit sharing and downloading by anyone. Gallery contributions are covered under the Azure Terms of Use.

Process for Publishing Azure ML Experiments

When you are ready to publish to the Gallery, follow the five steps below.

1. Fill out the title and tags fields. Keep them descriptive, highlighting the techniques used or the real-world problem being solved, for instance, “Binary Classification: Twitter Sentiment Analysis”.

2. Write a summary of what your content covers. Briefly describe the problem being solved and how you approached it.

3. Use the detailed description box to step through the different parts of your experiment. Some useful topics to include here are:

  • Experiment graph screenshot.
  • Data sources and explanation.
  • Data processing.
  • Feature engineering.
  • Model description.
  • Results and evaluation of model performance.

You can use Markdown to format as needed. Click the Preview icon to see how things will look when published. The examples in this collection show what to include and how you might organize the information.
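As a rough sketch, a detailed description might be organized like this in Markdown (the headings and image URL are placeholders, not a prescribed template):

## Data sources
Where the data came from and its licensing terms.

## Data processing and feature engineering
How the raw data was cleaned and which features were derived.

## Model and results
Which algorithms were used and how the model performed.
![Experiment graph](https://example.com/experiment-graph.png)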

TIP: The box provided for Markdown editing and preview is quite small. We recommend that you write your documentation in a Markdown editor and paste the completed document into the text box. After you have published your experiment, you can use standard web-based Markdown tools for editing and preview to make any necessary tweaks and corrections.

4. Upload a thumbnail image for your gallery item. This will appear at the top of the item page and in the item tile when browsing the gallery. You can choose an image from your computer or select one of the stock images.

5. Choose whether to publish your content publicly, or have it only accessible to people with the link.

TIP: If you want to make sure your documentation looks right before releasing it publicly, you can publish it as unlisted first and then switch it to Public from the item page.


That’s it – you’re all done.

You can now view your experiment in the Gallery and share the link with others. If you have published it publicly, your experiment will show up in browse and search results in the Gallery. You can also edit your documentation on the item page any time you are logged in.

TIP: To make changes to an experiment you have published, go back to the experiment in Azure ML Studio, make your changes, and publish again. By default, this updates your existing published content rather than creating a new item.

Post-Publishing

Now that you have published your work to the Gallery, others can download and start using it right away, either to learn or for use in their own projects.

Subscribe to Comments

People often use comments on Gallery items to ask questions or discuss methodologies. Make sure that you are notified about these: go to the bottom of the item page and create or log into a Disqus account. You will then be able to click Subscribe to be notified by email when there is a new comment on that item.

 

Share Your Work

You can share Gallery content via Twitter, LinkedIn, and email directly from the right-hand side of the item page.


Sharing Gallery content via blogs and social media is a great way for you to showcase your work and get broader community feedback.

 

 ML Blog Team

 


The first Technical Preview of Microsoft Azure Stack announced


Today, we announced the first Technical Preview of Microsoft Azure Stack. Technical Preview bits will be available this Friday, January 29. Check out Mike Neil’s blog post on Azure.com to learn more.

The complete guide to Microsoft WSUS and Configuration Manager SUP maintenance


~ Meghan Stewart | Support Escalation Engineer

I’ve recently seen a lot of questions about Windows Server Update Services (WSUS) maintenance for Configuration Manager environments, so I wanted to take a minute and hopefully address some of them here. Usually the questions are along the lines of “How should I properly run this in a Configuration Manager environment?” or “How often should I be running this maintenance?” I have also seen extremely conscientious Configuration Manager administrators who were completely unaware that WSUS maintenance should be run at all. After all, most of us just set up WSUS servers because they are a prerequisite for a Software Update Point (SUP). Once the SUP is set up, we close the WSUS console and pretend it doesn’t exist anymore. Unfortunately, this can be problematic for our Configuration Manager clients and the overall performance of the WSUS/SUP server.

So with the understanding that this maintenance needs to be done, I bet you’re wondering what maintenance you need to do and how often you need to do it. The answer is that you should be doing this maintenance monthly, and I’ll show you how below. Running the proper maintenance is pretty easy and doesn’t take very long for WSUS machines that have been well maintained from the beginning. However, be aware that if you have never run WSUS maintenance and the WSUS computer has been in production for a while, the cleanup will be harder the first time you run it, but it will be much faster in subsequent months.

Important Considerations

Before we get started, it’s important that I mention a few things:

  1. Read all of the instructions before starting this process; some steps in the middle of the article may need to be done before you can work through the process from start to finish.

  2. Remember that when doing WSUS maintenance when you have downstream servers, you add to the WSUS servers from the top down, but remove from the bottom up. So if you are syncing/adding updates, they flow into the top (upstream WSUS server) then replicate down to the downstream servers. When you do a cleanup, you are removing things from the WSUS servers, so you should remove from the bottom of the hierarchy and allow the changes to flow up to the top.

  3. It’s important to note that this WSUS maintenance can be performed simultaneously on multiple servers in the same tier. You do however want to make sure that one tier is done before moving onto the next one when doing a cleanup. The cleanup and re-index steps I talk about below should be run on all WSUS servers regardless of whether they are a replica WSUS server or not (see section 4 below for how to determine if the WSUS is a replica).

  4. This is a big one. You must ensure that you do not sync your SUPs during this maintenance process; otherwise you may lose some of the work you have already done. Check your SUP sync schedule and set it to manual for the duration of the process.

  5. Note that if you have multiple SUPs off the primary site or CAS that do not share the SUSDB, consider the WSUS that syncs with the first SUP on the site as residing in a tier below the site. For example, my CAS site has two SUPs. The one named “New” syncs with Microsoft Update; this is my top tier (Tier1). The server named “2012” syncs with “New”, so it is considered to be in the second tier and can be cleaned up at the same time as all my other Tier2 servers, such as my primary site’s single SUP.


How to run WSUS maintenance

The four basic steps necessary for proper WSUS maintenance include the following:

  1. Back up the WSUS database
  2. Run the WSUS Server Cleanup Wizard
  3. Re-index the WSUS database
  4. Decline superseded updates

I go through each of these below.

1. Back up your WSUS database

Back up your WSUS database (SUSDB) using whichever method you prefer.
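As a minimal sketch, assuming SUSDB is on the default Windows Internal Database instance on Windows Server 2012 or later and that the sqlcmd utility is installed (the backup path is a placeholder), a one-off backup could look like this:

sqlcmd -S \\.\pipe\MICROSOFT##WID\tsql\query -E -Q "BACKUP DATABASE SUSDB TO DISK = N'C:\WSUS\SUSDB.bak' WITH CHECKSUM, INIT;"

If SUSDB is on full SQL Server, point -S at the server\instance instead, or simply use your normal database backup tooling.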

2. Run the WSUS Server Cleanup Wizard

The WSUS Server Cleanup Wizard can be launched from the WSUS console. It is located under Options.


NOTE: If you have not done maintenance before, run step 3, then step 2, then step 3 again. The initial re-index will help the cleanup go faster.

Please be aware that if the WSUS Server Cleanup Wizard has never been run and the WSUS server has been in production for a while, the cleanup may time out. In that case, re-index first as described in the note above, then run the cleanup with only the top box checked (unused updates and update revisions). This may require a few passes. If it times out, run it again until it completes, then run each of the other options one at a time. Lastly, make a full pass with all options checked. See the following TechNet documentation for more information:

Use the Server Cleanup Wizard


The cleanup is finished once it actually reports the number of items it has removed. If you do not see this returned on your WSUS server, it is safe to assume that the cleanup timed out and you will need to start it again.


3. Re-index the WSUS database

After the cleanup is finished, you need to re-index the WSUS database (SUSDB) with the following script:

http://gallery.technet.microsoft.com/scriptcenter/6f8cde49-5c52-4abd-9820-f1d270ddea61

The steps to run the script differ depending on whether you installed SUSDB on SQL Server or on Windows Internal Database (WID). This was specified when you installed WSUS. If you are not sure which you used, check the registry key HKLM\Software\Microsoft\Update Services\Server\Setup on the WSUS server and look for the SqlServerName value. If you see just a server name or server\instance, you are using SQL Server. If you see a string containing ##SSEE or ##WID, you installed on Windows Internal Database.
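A quick way to read that value from PowerShell:

# Shows the SQL instance hosting SUSDB; ##SSEE or ##WID means Windows Internal Database
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Update Services\Server\Setup').SqlServerName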


If you installed SUSDB on Windows Internal Database

If you installed SUSDB on Windows Internal Database (WID) you will need to install SQL Management Studio Express in order to run the re-index script. If you’re not sure which version of SQL Server Management Studio Express to install, here’s an easy way to figure that out:

  • For Windows Server 2012, go to C:\Windows\WID\Log and find the error log that has the version number you’re using. Look up the version number here:

321185 - How to determine the version, edition and update level of SQL Server and its components (https://support.microsoft.com/en-us/kb/321185)

This will tell you what Service Pack level it is running. Include the SP level when searching the Download Center for SQL Management Studio Express as sometimes it does matter.

  • For Windows Server 2008 R2 or below, go to C:\Windows\SYSMSI\SSEE\MSSQL.2005\MSSQL\LOG and open the last error log with Notepad. At the very top there will be a version number (e.g. 9.00.4035.00 x64). Look up the version number here:

321185 - How to determine the version, edition and update level of SQL Server and its components (https://support.microsoft.com/en-us/kb/321185)

This will tell you what Service Pack level it is running. Include the SP level when searching the Download Center for SQL Management Studio Express.

Once SQL Management Studio Express is installed, launch it and it will prompt you to enter the server name to connect to:

  • If your OS is Windows Server 2012, use \\.\pipe\MICROSOFT##WID\tsql\query
  • If you are not running Windows Server 2012, enter \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query

NOTE: For WID, you may want to run SQL Server Management Studio Express as administrator if you were not the person who installed WSUS.

TIP: Alternatively, you can use the sqlcmd utility to run the script if it is installed. See the following TechNet documentation for more information:

Reindex the WSUS Database

If you installed SUSDB on SQL Server

If you installed on full SQL Server, simply launch SQL Server Management Studio and enter the name of the server (and instance if needed) when prompted.

Running the script

To run the script in either SQL Server Management Studio or SQL Server Management Studio Express, click on the New Query button, paste the script in the window and then click Execute. When it is finished you will see Query executed successfully along with the messages of what indexes were rebuilt.


4. Decline superseded updates

Additionally, you may want to decline superseded updates on the WSUS server to help your clients scan more efficiently. Before declining updates, ensure that the superseding updates are deployed and that you no longer need the superseded ones. Configuration Manager has a separate cleanup of its own that expires superseded updates based on criteria that you provide; for more information, review the Supersedence Rules documentation for your version of Configuration Manager.

You can decline updates manually in WSUS if you wish, or you can run this PowerShell script. Simply download the script and rename it with a .PS1 extension. Please note that I am providing this script “as is” and it should be fully tested in a lab before being used in production. Microsoft makes no guarantees regarding the use of this script in any way.

NOTE: Always run the script with the -SkipDecline parameter before running the actual decline, so you get a summary of how many superseded updates you are about to decline.

I normally recommend running the script on the WSUS servers if you choose to expire superseded updates immediately in Configuration Manager. I run this once a quarter in my environment. This should be done on all autonomous WSUS servers in the Configuration Manager/WSUS hierarchy. It does not need to be run on WSUS servers configured as replicas, such as secondary site SUPs. If you are unsure, verify the setting on your WSUS server.
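If the UpdateServices PowerShell module is available on the WSUS server, here is a minimal sketch of such a check (assumes Windows Server 2012 or later):

# $true means this WSUS server is a replica and does not need the decline script run against it
(Get-WsusServer).GetConfiguration().IsReplicaServer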


If you do not expire updates immediately in Configuration Manager, you will need to set an exclusion period that matches your Configuration Manager setting for number of days to expire superseded updates. In this case, it would be 60 days since I specified to wait 2 months in my SUP properties.


Examples of how to run the script from an elevated PowerShell prompt:

Decline-SupersededUpdates.ps1 -UpdateServer SERVERNAME -Port 80 -SkipDecline
Decline-SupersededUpdates.ps1 -UpdateServer SERVERNAME -UseSSL -Port 8351
Decline-SupersededUpdates.ps1 -UpdateServer SERVERNAME -Port 8530
Decline-SupersededUpdates.ps1 -UpdateServer SERVERNAME -Port 8530 -ExclusionPeriod 60

In my environment, I first run the script with -SkipDecline and -ExclusionPeriod 60 to gather information about my WSUS server and how many updates I will decline, then run it with -ExclusionPeriod 60, and finally run the script to decline the rest of the superseded updates.

Troubleshooting

What if I find out I needed one of those updates I declined?

If you decide you need one of these declined updates in Configuration Manager for some reason, you can get it back in WSUS by right-clicking on the update and selecting Approve. Change the approval to Not Approved and resync your SUP to get the update back in.


If the update is no longer in your WSUS, you can import it from the Microsoft Update Catalog as long as it has not been expired from the catalog.


 

HELP! My WSUS has been running for years without ever having maintenance done and the cleanup wizard keeps timing out.

There are really two different options you can take here:

1. Reinstall WSUS with a fresh database.

2. Ensure you have a backup of the SUSDB, then run a re-index. When that completes, run the following SQL script in SQL Server Management Studio or SQL Server Management Studio Express. After it finishes, follow all of the above instructions for running maintenance. This last step is necessary because the script below removes only unused updates and update revisions.

DECLARE @var1 INT
DECLARE @msg nvarchar(100)

CREATE TABLE #results (Col1 INT)

-- Collect the IDs of all obsolete updates that are eligible for cleanup
INSERT INTO #results (Col1) EXEC spGetObsoleteUpdatesToCleanup

DECLARE WC CURSOR FOR
    SELECT Col1 FROM #results

OPEN WC
FETCH NEXT FROM WC INTO @var1

-- Delete the obsolete updates one at a time, reporting progress as we go
WHILE (@@FETCH_STATUS > -1)
BEGIN
    SET @msg = 'Deleting ' + CONVERT(varchar(10), @var1)
    RAISERROR(@msg, 0, 1) WITH NOWAIT
    EXEC spDeleteUpdate @localUpdateID = @var1
    FETCH NEXT FROM WC INTO @var1
END

CLOSE WC
DEALLOCATE WC

DROP TABLE #results

Automating WSUS maintenance

I’m often asked whether these WSUS maintenance tasks can be automated, and the answer is yes, assuming that a few requirements are met first.

1. If you have never run WSUS cleanup, you need to do the first two cleanups manually. Your second manual cleanup should be run 30 days after your first, since it takes 30 days for some updates and update revisions to “age out”. There are specific reasons why you don’t want to automate until after your second cleanup: your first cleanup will probably run longer than normal, so you can’t use it to judge how long this maintenance will normally take, whereas the second cleanup is a much better indicator of what is normal for your machines. This matters because you need to figure out roughly how long each step takes as a baseline (I also like to add about 30 minutes of wiggle room) so that you can determine the timing for your schedule.

2. If you have downstream WSUS servers, run the cleanup on them first, then on the upstream servers.

3. To schedule the re-index of the SUSDB, you will need a full version of SQL Server; Windows Internal Database (WID) does not provide a way to schedule a maintenance task through SQL Server Management Studio Express. That said, where WID is used you can use the Task Scheduler with SQLCMD as mentioned earlier. If you go this route, it’s important that you DO NOT SYNC YOUR WSUS SERVERS/SUPs during this maintenance period! If you do, it is very possible your downstream servers will just end up resyncing all of the updates you just attempted to clean out. Personally, I schedule this overnight before my AM sync so I have time to check on it before my sync runs.

Links you will need and some you may possibly need:

Setting up the WSUS Cleanup Task in Task Scheduler

The easiest basic directions and troubleshooting for this step are here, but I’ll walk you through the process below.

1. Open Task Scheduler and Select Create a Task. Under the General tab, set the name of the task, the user that you want to run the PowerShell script as (most people use a service account), select Run whether a user is logged on or not, then add a description if you wish.


2. Under the Actions tab, add a new action and specify the program/script you want to run. In this case, we need to use PowerShell and point it to the PS1 file we want it to run. I use the script found here. If you would like a log, you can change the last line of the script to read:

$cleanupManager.PerformCleanup($cleanupScope)| Out-File c:\wsus\wsusclean.txt

Note that you will get an FYI/warning in Task Scheduler when you save, but this is OK and can be ignored.
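For reference, a cleanup script of this kind is typically built on the WSUS administration API. Here is a minimal sketch (not necessarily identical to the linked script; the cleanup options mirror the Server Cleanup Wizard checkboxes, and the log path is a placeholder):

[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.UpdateServices.Administration')
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()

# Select everything the Server Cleanup Wizard would clean
$cleanupScope = New-Object Microsoft.UpdateServices.Administration.CleanupScope
$cleanupScope.CleanupObsoleteUpdates      = $true   # unused updates and update revisions
$cleanupScope.DeclineSupersededUpdates    = $true
$cleanupScope.DeclineExpiredUpdates       = $true
$cleanupScope.CleanupObsoleteComputers    = $true
$cleanupScope.CleanupUnneededContentFiles = $true
$cleanupScope.CompressUpdates             = $true

$cleanupManager = $wsus.GetCleanupManager()
$cleanupManager.PerformCleanup($cleanupScope) | Out-File c:\wsus\wsusclean.txt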


3. Set your schedule under the Triggers tab for once a month on any schedule you wish. Again, you must ensure that you do not sync your WSUS during the entire cleanup and re-index time. This statement really is important enough for me to bold it three times in a single article.


4. Set any other conditions or settings you would like to tweak as well. Note that when you save the task, you may be prompted for credentials of the “run as” user.

5. You can also use these steps to configure the Decline-SupersededUpdates.ps1 script to run every 3 months. I usually set it to run before the other cleanup steps, but only after I have run it manually and ensured it completed successfully. I run it at 12:00 AM on the first Sunday every 3 months.

Setting up the SUSDB re-index for WID using SQLCMD & Task Scheduler

1. Save the script here as a .sql file (e.g. SUSDBMaint.sql)

2. Create a basic task and give it a name:


3. Schedule this task to start about 30 minutes after you expect your cleanup to finish running. My cleanup is running at 1:00 AM every first Sunday. It takes about 30 minutes to run and I am going to give it an additional 30 minutes before starting my re-index. This means I would schedule this task for every 1st Sunday at 2:00 AM, as shown here:


4. Select the action to Start a program. In the Program/script box type the following, where the file specified after the –i parameter is the path to the SQL script you saved in step 1, and the file specified after the –o parameter is where you would like the log to be placed. Here’s an example of what that might look like:

"C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLCMD.exe" -S \\.\pipe\Microsoft##WID\tsql\query -i C:\WSUS\SUSDBMaint.sql -o c:\WSUS\reindexout.txt


5. You will get a warning, similar to the one you got when creating the cleanup task. Click Yes to accept the arguments, then click Finish to apply.


6. You can test the script by forcing it to run and reviewing the log for errors. If you run into issues, the log will tell you why. Usually if it fails, the account running the task does not have appropriate permissions or the WID service is not started.
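A sketch of forcing a run from PowerShell (the task name is whatever you chose in step 2; the log path is the one from step 4):

Start-ScheduledTask -TaskName 'SUSDB Re-index'
Get-Content C:\WSUS\reindexout.txt -Tail 20   # review the end of the output log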

Setting up a basic Scheduled Maintenance Task in SQL for non-WID SUSDBs

NOTE: You must be a sysadmin in SQL Server to create or manage maintenance plans.

1. Open SQL Server Management Studio and connect to your WSUS instance. Expand Management, then right-click on Maintenance Plans and select New Maintenance Plan. Give your plan a name.


2. Click on subplan1 and then ensure your Toolbox is in context:


3. Drag and drop the task Execute T-SQL Statement Task:


4. Right-click on it and select Edit. Copy and paste the WSUS re-index script and click OK:


5. Schedule this task to run about 30 minutes after you expect your cleanup to finish running. My cleanup is running at 1:00 AM every first Sunday. It takes about 30 minutes to run and I am going to give it an additional 30 minutes before starting my re-index. This means I would schedule this task to run every 1st Sunday at 2:00 AM.


6. While you are creating the maintenance plan, you may want to consider adding a backup of the SUSDB into your plan as well. I usually backup first, then re-index. Note this may add additional time to your schedule.

Putting it all together

When running this in a hierarchy, you want the WSUS cleanup run from the bottom of the hierarchy up, but you want the decline script to run from the top down.

Since I can’t sync during the actual cleanup, I would prefer to be able to complete all tasks overnight then check on their completion via the logging when I come into the office in the morning before my next sync is scheduled. This is because in the case that something failed, I can reschedule the maintenance for the next night once I identify what failed and resolve the issue.

These tasks may run faster or slower in your environment and the timing of your schedule should reflect that. Hopefully they are faster since my lab environment tends to be a bit slower than a normal production environment. I am a bit aggressive on the timing of the decline scripts since if Tier2 overlaps Tier3 by a few minutes, it will not cause a problem.

My sync is not scheduled to run during this window, which keeps the declines from accidentally flowing into my Tier3 replica WSUS servers from Tier2. I did give myself extra time between the Tier3 decline and the Tier3 cleanup, since I definitely want the decline script to finish before my cleanup runs.

This brings up a common question: since I am not syncing, why shouldn’t I run all of the cleanups and re-indexes at the same time? The answer is that you probably could, but I wouldn’t. If my coworker across the globe needs to run a sync, this schedule minimizes the risk of orphaned updates in WSUS, and I can reschedule any piece to rerun to completion the next night:

Time       Tasks
12:00 AM   Tier1 Decline
12:15 AM   Tier2 Decline
12:30 AM   Tier3 Decline
1:00 AM    Tier3 WSUS Cleanup
2:00 AM    Tier3 Re-index, Tier2 WSUS Cleanup
3:00 AM    Tier1 Cleanup, Tier2 Re-index
4:00 AM    Tier1 Re-index

Special thanks to Vinay Pamnani for providing the script to decline superseded updates with an exclusion period, and to The Scripting Guy.

Meghan Stewart | Support Escalation Engineer | Microsoft


What’s new in Microsoft Operations Management Suite: Log Analytics


Last week we began looking in depth at Microsoft Operations Management Suite, as part of an ongoing series to help you understand how the diverse feature set comes together. Our current focus area is Log Analytics, which allows you to take advantage of cloud resources to analyze log data from across a hybrid IT environment. One of the great things about a cloud service is that we can add features rapidly, expanding the value available to you as a user. Today we’re going to look at some of the things that have been added to the Log Analytics capabilities of Operations Management Suite in just the last 90 days.

Alert notification and automated remediation

Log analytics is about taking action on what you learn from correlating data across multiple sources. Alert notification in Operations Management Suite makes it easier to drive actions based on changing data. You can set up alerts so that if a search finds a specific set of results, an email will be sent to a list of recipients that you define in advance. Alternatively, you can trigger the execution of an Automation runbook to remediate the problem. You can also do both, so that your list of recipients is notified while the problem is being corrected.

Alert notification is designed for flexibility. Alerts can be based on any of your saved searches, and you set the timing for how often the search is repeated. You also define the parameters for acceptable results. Both lightweight and easy to manage, alerts are a key new element for Log Analytics, allowing you to build in “auto-remediation” for common problems. You can find out more about how to use this feature here.

Support for Linux, including containers

A major challenge for IT operations today is managing across both Windows and Linux. Too often, you have to choose between specialized tools that are primarily designed for one platform or the other, giving you limited visibility across the full environment. We’ve added support for Linux to Operations Management Suite to help you get a big picture view of events and information. With the OMS agent for Linux, you can collect Syslog events and performance metrics. Connecting into Docker, you can gather container logs, metrics, and inventory. You can also integrate with Linux management tools, specifically Zabbix and Nagios, for alerts. For more detail, check out this blog post.

Mobile apps for iOS, Android and Windows Phone

Information is most valuable when it’s timely, and that means you need to be able to get to it easily from a variety of devices. The mobile app for Operations Management Suite gives you access to your personalized dashboards, solutions and saved search queries. You can download the app here.

Crowd-sourced information on patching time

Last week, we talked about update assessment, and how Operations Management Suite gives you information on how long patching can take based on the experience of other users. We’re already hearing from customers about how that information is helping them ensure that patching takes place within very tight timing windows. That’s another new feature that was recently added, expanding the ways you can benefit from cloud-based management.

If you want to see all these features in action, check out our free trial for Operations Management Suite.

There’s lots more coming for Operations Management Suite in the next 90 days. Stay tuned!  And if you’re interested in the Linux capabilities specifically, check back later this week for the continuation of our tour of Log Analytics.

Check out our new guide to Microsoft WSUS and Configuration Manager SUP maintenance


Just in case you missed it, Microsoft’s own Meghan Stewart just published a great guide covering all the whys and hows of WSUS and Configuration Manager Software Update Point maintenance. Whether you’re new to update maintenance or a seasoned pro, you’ll definitely want to give this one a read. You can find Meghan’s complete guide here:

The complete guide to Microsoft WSUS and Configuration Manager SUP maintenance

J.C. Hornbeck | Solution Asset PM | Microsoft


Chef Meetup at Microsoft Reactor



Microsoft and Chef have proven again and again that Chef and Azure go together extremely well. Case in point: a cool company called kCura, or Microsoft’s own MSN.

If any of this resonates and you want to find out more, here’s a treat for you:
On February 10, 2016, Chef is hosting a meetup at the Microsoft Reactor in San Francisco. Nathen Harvey, Chef’s VP of Community Development, will be there alongside other techies from Chef and Microsoft. Make sure to stop by if you are in the area.

Learn about the workflow changes Chef enables and experience a demonstration of the entire Chef workflow. Along the way, you'll learn about Chef, DevOps, continuous delivery, and compliance.

Worth your time! Register here.

Cheers,
@volkerw

PowerTip: Import colon-delimited file with PowerShell


Summary: Learn how to use Windows PowerShell to import a file that uses a colon as a delimiter.

Hey, Scripting Guy! Question How can I use Windows PowerShell to import a file that is delimited with a colon instead of a comma?

Hey, Scripting Guy! Answer Use the Import-CSV cmdlet and specify the colon as the delimiter, for example:

Import-Csv -Path C:\fso\applog.csv -Delimiter ':'
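For example, given a hypothetical applog.csv whose first row is the header Date:Level:Message, the import produces objects with those property names:

# C:\fso\applog.csv (hypothetical contents):
#   Date:Level:Message
#   2016-01-27:Error:Disk full on drive C
#   2016-01-27:Info:Service started

Import-Csv -Path C:\fso\applog.csv -Delimiter ':' |
    Where-Object { $_.Level -eq 'Error' } |
    Select-Object Date, Message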

Update 1601 now available in System Center Configuration Manager Technical Preview

On December 8, 2015 we made the latest version of System Center Configuration Manager (ConfigMgr) generally available. Thanks to those of you who have already upgraded! As part of our ongoing commitment to quality and innovation, we are continuously previewing upcoming features with the Configuration Manager community before they are made generally available. These Technical Previews allow you to try new Configuration Manager features in your test environment and provide us with valuable feedback...(read more)

The strange case of Password Synchronization for users whose password has expired on-premises yet who can still sign in to the cloud with that password.

By: Hernan Bohl. Do you have users who, for example, work in the field and never sign in to the on-premises environment? Are you using Password Synchronization? If the answer to both questions is yes, then this article will be of interest to you. I recently worked on a support incident where users were using password synchronization, but those users’ passwords had already expired in the on-premises environment. The on-premises password policy...(read more)

Using Desired State Configuration and Chef to deploy System Center


Summary: Learn how to use Chef and DSC in Windows PowerShell to deploy System Center in this guest blog post by Jason Morgan.

Ed Wilson, here. Today I have a guest blog post by Jason Morgan in which he will talk about using Desired State Configuration (DSC) and Chef to deploy System Center. Welcome back, Jason…

Hello again. Today I’m writing to analyze the deployment of System Center by using Desired State Configuration and Chef. I’ve been using Chef heavily for the last six months because I decided that using DSC on its own to run my System Center deployments was too heavy a lift. Adding Chef has allowed me to run my System Center builds from my local lab through my test, UAT, and production environments with a single repository.

Likewise, DSC has provided me with a simple and common interface for dealing with Windows Server, which I’ve found greatly expands the administrative surface for Chef (where those resources are somewhat limited).

DSC and System Center

I’ve been taking advantage of the resources provided by Microsoft to deploy some common System Center components: SCOM, SMA, and Service Manager. In this regard, xSCOM and xSCSMA do a pretty good job for SCOM and SMA, but there is no such resource for Service Manager.

For all the System Center components I use the Group, User, Package, and WindowsFeature resources to configure the prerequisites. You can write loops in Chef to handle adding these prerequisites. The resources are called in almost the same way as in PowerShell. Here’s a quick example:

['NLB','RSAT-NLB'].each do |feature|
  dsc_resource feature do
    resource :WindowsFeature
    property :Name, feature
    property :Ensure, 'Present'
  end
end

Here I take an array of two strings, NLB and RSAT-NLB, assign each in turn to a variable named feature, and loop through the WindowsFeature DSC resource.

Chef lets you store information such as users and passwords in searchable repositories called “data bags.” You can encrypt data bag items and make use of Chef’s own security system to decrypt data bag items on the individual nodes. This makes the distribution of account information especially simple. Here’s an example of searching through the items in the users data bag with an id matching myapp and adding them to the Administrators group:

search('users',"id:*myapp*").each do |admin|

            dsc_resource "Administrator #{admin['id']}" do

            resource :Group

            property :GroupName, 'Administrators'

            property :Ensure, 'Present'

            property :MembersToInclude, ["#{node['serverBase']['netbiosName']}\\#{admin['id']}"]

            property :Credential, ps_credential("#{node['serverBase']['netbiosName']}\\#{install['id']}",install['password'])

            end

 end

You have the option to store non-standard resources with your cookbooks if they are not too large. Personally, I archive any non-standard resources along with the actual binaries I need for the installations. I maintain the archive within a web server built for each environment. Here is an example of using a native Chef resource, remote_file, to download the WMF5 update and store it locally:

remote_file "#{ENV['SYSTEMDRIVE']}\\Win8.1AndW2K12R2-KB3066437-x64.msu" do
  source "https://download.microsoft.com/download/3/F/D/3FD04B49-26F9-4D9A-8C34-4533B9D5B020/Win8.1AndW2K12R2-KB3066437-x64.msu"
end

There are two key takeaways here:

  • When you reference Windows paths in Chef recipes, you need to use \\ in place of \. Ruby sees the \ as the escape character, so you need one to escape the other.
  • Chef recipes can leverage environment variables with ENV['MyVariableName'].

As a natural segue, Chef provided me with significant benefits when working with environments. Ingredients like accounts, URLs, IP schemes, and even the number of nodes in a particular app vary by environment. Chef allows you to use the environment to override values in a recipe. I won’t go any deeper into environments here, but it is a topic you should become very familiar with if you’ll be using Chef.

Operations Manager

Operations Manager is pretty straightforward. After the prerequisites are installed, all you need to do is call the resource, for example:

dsc_resource 'WebConsole' do
  resource :xSCOMWebConsoleServerSetup
  property :Ensure, 'Present'
  property :SourceFolder, 'SCOM'
  property :SourcePath, "#{ENV['SYSTEMDRIVE']}\\Sources"
  property :SetupCredential, ps_credential("#{node['serverBase']['netbiosName']}\\#{admin['id']}", admin['password'])
  property :ManagementServer, node['scom']['mgmtServer']
  notifies :reboot_now, 'reboot[now]', :immediately
end

One thing worth mentioning is that when you use pscredential in a recipe, Chef provides a function to build the credential object:

ps_credential([string]username,[string]password)

Service Management Automation

Service Management Automation (SMA) takes three resources for a full standalone instance:

  • xSCSMAWebServiceServerSetup
  • xSCSMARunbookWorkerServerSetup
  • xSCSMAPowerShellSetup

Again, when the prerequisites are in place, it’s a pretty quick installation to set up a standalone server. Additionally, it’s really nice to use Chef to install the correct modules on all my runbook servers.

dsc_resource 'SMAPS' do
  resource :xSCSMAPowerShellSetup
  property :Ensure, 'Present'
  property :SourcePath, "#{ENV['SYSTEMDRIVE']}/Sources"
  property :SetupCredential, ps_credential("#{node['serverBase']['netbiosName']}\\#{smaSetupAccount['id']}", smaSetupAccount['password'])
  property :SourceFolder, 'SMAInstall'
end

Service Manager

Service Manager is a slightly different beast. As of this writing, there are no DSC resources for deploying it. Luckily, Chef has a pretty robust powershell_script resource, which is a lot easier to use than the DSC script resource (in my experience). You only need to write a script, put it in a string, and insert variable substitutions to change whatever you need to change.

When you are going to use it, be sure you set up a guard condition—a “not_if” or “only_if” script, which will allow your resource to be idempotent.

I had a big issue with Service Manager’s installation script. Start-Process, which I used to run the installation, wouldn’t let me swap the user running the setup. It made it difficult to run unattended at first. As with everything else in this process, Chef had a pretty easy answer for this issue. I set up the Chef client to run as a task under the appropriate account and let it run.

Here’s an example of using the PowerShell_script resource to provision disks:

powershell_script 'multiple disks' do
  code <<-MYCODE
    $disks = Get-Disk | Where-Object {$_.OperationalStatus -match "Offline"}
    foreach ($d in $disks)
    {
      $d | Initialize-Disk -Passthru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -Confirm:$false
    }
  MYCODE
  only_if '(Get-Disk | Where-Object {$_.OperationalStatus -match "Offline"}).Count -ge 1'
end

Ruby, the language Chef uses, lets you set up multiline strings much like PowerShell here-strings. The syntax looks like this:

<<-STRING

some text

STRING

You can wrap longer PowerShell scripts in a multiline string construct and pass the string as the argument for the code parameter on the powershell_script resource. The only_if parameter ensures that the script won’t run unless it’s necessary. Both only_if and not_if in the powershell_script resource respond to the Boolean value returned by their script.

Why add Chef?

I wanted to highlight why I added Chef on top of DSC. Chef has some really cool functionality, which allowed me to speed up my development cycles and releases. Here are some of the key features:

Handling password distribution

One of the major issues I had with DSC on its own was securely distributing passwords. Chef made that pretty simple. You can encrypt data bag items, and by using a cool feature called chef-vault, you can use the same certificates that Chef distributes automatically to decrypt those items from appropriate clients.

Distributing resources

Because Chef executes each resource independently of the others, I can use earlier resources in a run to copy and extract the DSC resources that later steps need. This was always one of my major issues with DSC on its own.

Running PowerShell scripts

If you’ve tried to use the DSC script resource a lot, you’ve likely run into issues expanding variables. The DSC script resource won’t let you reference variables from the broader configuration without a lot of messing around. It’s complicated and hard to use. With Chef, it’s simple. I write the script and use Chef’s variable system to insert values as needed.

Running as a task

DSC runs as a task that runs as the system. With Chef, you have options: you can run as a service or as a task. That task can then be configured to run with all the arguments of any other task. I can swap the user or the execution time, and the task is simple and straightforward. Don’t get me wrong, the LCM is awesome, but I definitely don’t miss using it.

Environments

This is one of the things that most attracted me to Chef. Having the ability to set certain parameters that change depending on your environment (test/dev/uat/prod) is incredibly valuable. I write one deployment configuration (“recipe” in Chef speak), and it changes its characteristics based solely on the environment it’s in. Ultimately, I swapped from using only DSC to using DSC and Chef together when I found myself writing my own environment system.

Order of operations

Resources are always executed in order and, unless otherwise specified, a failure in a resource stops execution of the client immediately. Always knowing where you had an issue greatly simplifies any needed troubleshooting. Another benefit is that the Chef client writes the details of its last run in a log, enabling you to review the details of past runs at your convenience.

~Jason

Thank you, Jason, for a way cool blog post.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. Also check out my Microsoft Operations Management Suite Blog. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Risk Management in Project 2016. Part 2: Risk Classification


We continue introducing our readers to risk management in Project 2016. The first part of the article is available at this link.

Risk classification is very important for subsequent analysis and for developing risk responses. Here are a few examples.

The PMBOK 5 standard proposes the following risk categories:

  1. Technical
  2. External
  3. Organizational
  4. Project management

...(read more)

About the "Protected Network" Feature in Windows Server 2012 R2


Hello, this is Nomura from Windows Platform Support. In this post I'd like to introduce "Protected Network," one of the new features of Hyper-V in Windows Server 2012 R2.

When "Protected Network" is configured and a disconnection is detected on a virtual machine's virtual network adapter, that virtual machine is automatically moved to another Hyper-V host (this feature works only when Hyper-V clustering is configured).

"Protected Network" is configured in the Hyper-V virtual machine settings window by expanding [Advanced Features] under the target [Network Adapter], and it is enabled by default. As a result, in Windows Server 2012 R2, every network adapter added to a virtual machine is automatically configured as a protected network adapter.

 

For a virtual machine with "Protected Network" configured, when a disconnection is detected on the physical NIC behind the virtual switch bound to the virtual network adapter, a live migration of that virtual machine is performed. Before the migration runs, the destination Hyper-V host's virtual switch is checked to confirm that it is properly connected to the network (this prevents repeated live migrations that would occur if a network failure were also detected on the destination host's switch).

If "Protected Network" is not configured, a failure of the physical NIC behind the virtual switch bound to the virtual network adapter cannot be detected from the virtual network, which jeopardizes the virtual machine's sessions.

Consequently, enabling the "Protected Network" feature lets live migration preserve the virtual machine's session state, avoiding downtime and maintaining network availability at a higher level.

 

<References>

- Protected Networks in Windows Server 2012 R2

http://blogs.msdn.com/b/virtual_pc_guy/archive/2014/03/11/protected-networks-in-windows-server-2012-r2.aspx

- What's New in Failover Clustering in Windows Server

https://technet.microsoft.com/en-us/library/dn265972.aspx

- Windows Server 2012 R2 Failover Clustering: Deployment, Operations, and Management Guide (Word format)

http://download.microsoft.com/download/0/7/B/07BE7A3C-07B9-4173-B251-6865ADA98E5D/WS2012R2_MSFC_ConfigGuide_v1.1.docx

Exchange 2016 Resources


Below are a few articles and bits from the web that I have been using to get up to speed on some of the concepts in Exchange 2016. By no means is this list exhaustive, so make sure to check TechNet and the Exchange team blog for updates :)

Bits from the Web:

What's new in Exchange 2016

Exchange 2016 TechNet

Exchange Server 2016: Forged in the cloud. Now available on-premises.


New Zealand Ignite 2015

Deploying Exchange Server 2016

Learn how to deploy Exchange Server 2016 or Exchange Server 2013 on-premises into existing deployments of Exchange. The session includes real-world best practices and first-hand feedback.

Exchange Server Preferred Architecture

This session reviews the preferred architecture for deploying Exchange Server. These principles apply to both Exchange Server 2013 and Exchange Server 2016 due later this year. Learn how the building block architecture is intended to be designed for your organization and the principles behind the recommendations. This session covers the new server architecture in detail. This session allows you to gain deep understanding of the Exchange Server 2013 and 2016 architecture and is a "must-attend" session to begin your own design.

Exchange Server High Availability and Site Resilience

This session covers Microsoft's best practices, requirements, and recommendations for deploying on-premises Exchange servers (2013/2016) in a highly available and site-resilient configuration. It also includes guidance for using an Azure IaaS virtual machine as a third datacenter to enable datacenter failover scenarios.

Office Mechanics Videos on Channel 9:

Exchange Server 2016 & Outlook on the go - Mobile, browser and productivity updates

In this short demo, Allen Filush from the Outlook team walks through updated Outlook experiences in the browser to view media while working in email and integrated Outlook 2016 experiences on the desktop.

Exchange Server 2016 demo - Collaboration updates

In this 3-minute demo, Rebecca Lawler from the Exchange engineering team shows how to easily attach files stored in SharePoint. She demonstrates new capabilities to edit documents and respond to the conversations from the same view in Outlook on the web.  And she explains the two infrastructure options you can use to deploy document collaboration with Exchange Server 2016.

Exchange Server 2016 - Performance, architecture and compliance updates

Exchange Server 2016 takes what was learned running Exchange Online at scale to improve performance and reliability for on-premises Exchange infrastructure. In this short demo, Greg Taylor from Exchange engineering explains how the team has simplified the Exchange Server architecture and roles, what Exchange 2016 does to prevent and recover from data corruption, and how Exchange can leverage cloud services for enhanced protection.

Exchange Server 2016 Smarter Inbox - Search and customization updates

In this 3-minute demo, Jason Henderson from the Exchange engineering team highlights search enhancements to provide faster and more complete results – even when searching your calendar. He also walks through updates to the extensibility platform, including the add-ins marketplace that connects your inbox to 3rd party and in-house services to help you work more effectively.

Exchange Team Blog

Exchange Server Role Requirements Calculator Update (Includes Exchange 2016)

The all-powerful calculator! No introduction necessary for this :)

Ask the Perf Guy: Sizing Exchange 2016 Deployments

Start your sizing journey by reading this post.

The Exchange 2016 Preferred Architecture

The preferred architecture is there to help you understand how to optimise your deployment of Exchange 2016 by utilising good practice direct from the Engineering Team.

Load Balancing in Exchange 2016

Always a favourite topic for an Exchange PFE and customers.


AAD Connect: The Three Forests Story.


I am writing about a grey area that I recently encountered while working on an AAD Connect project for a customer. The customer had one primary forest (Forest A) and a secondary forest (Forest B) which essentially had the same users represented twice (both in an enabled state), plus a third forest (Forest C), an extranet forest with a disparate set of users. A quick look at the AAD Connect supported topologies shows that this is a supported topology (https://azure.microsoft.com/en-us/documentation/articles/active-directory-aadconnect-topologies/).

My topology looks something like this:

[Diagram: MultiForestSingleDirectory]

Some users match between Forests A and B, while the Forest C users have no counterpart.

[Diagram: MultiForestFullMesh]

I ended up choosing the following settings for matching users, the source anchor, and the UPN.


Now, if both user objects (matched by UPN) are enabled in Forests A and B, by default AAD Connect can project from either object and join the remaining one. This wasn't acceptable: ADFS is based on Forest A, so the AAD user objects must carry the objectGUID of the Forest A object in sourceAnchor.

The solution is to explicitly allow only Forest A to project and force Forest B objects to join only. This can be done by tweaking the default AAD Connect configuration as follows.

1. Open the Synchronization Rules Editor.


2. Open the User Join Inbound Sync Rule for ForestB.com and edit it.


Note: Normally we create a custom rule and should avoid editing the OOTB rules. However, in this case we must edit the default rule.

3. Change the link type to Join instead of Provision.


Now Forest B objects will only join to Forest A objects that have already been projected. Voila, everything works as expected. No other changes are needed to the Forest A or Forest B join rules.


Mobile Deep Linking


In a mobile world where more and more users prefer to work solely on mobile devices, it's desirable to be able to send each other links that open directly in the mobile client (not the web client).

In CRM 2016 you can use the new application handler for CRM mobile clients to directly link to CRM forms, views, and dashboards from external applications so that when you click on the link in an external application, the target element opens in CRM for phones or CRM for tablets. You can also open an empty form for creating an entity record.

If you are already signed in to your CRM instance in CRM for phones or CRM for tablets, the target record is displayed in the mobile client when you click the link in the external application. Otherwise, you're prompted to sign in to your CRM instance in the CRM mobile client, and upon doing so, the target element is displayed. You must have CRM for phones or CRM for tablets installed on your mobile device to use this feature.

Example

On my tablet, in the CRM for Tablets app, I navigate to a dashboard I'd like to share with a colleague.

I swipe up from the bottom to see the available commands and press "Share Link".

A task pane slides in from the right and offers to share the link via mail (and more). I click "Mail".

The Mail app opens with a message prepopulated with the deep link to the dashboard in CRM for Tablets (and the web link if needed). I can type a few sentences and send the mail.

The recipient receives the mail in his or her email app on the tablet.

Note that the link contains the ID of the dashboard (read more in "Open forms, views, and dashboards in CRM mobile client with a URL" - link).
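As a rough sketch, such a deep link uses the CRM mobile app's URL scheme and looks something like this (the GUID is a placeholder; see the article referenced above for the exact parameters):

ms-dynamicsxrm://?pagetype=dashboard&id=00000000-0000-0000-0000-000000000000

A form for an existing record follows the same pattern, for example ms-dynamicsxrm://?pagetype=entity&etn=account&id=<record GUID>.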

Clicking the link prompts a dialog asking whether you want to switch apps (from the Mail app to the CRM for Tablets app).

Clicking yes opens the CRM for Tablets app...

... with the dashboard.

See also

  • CRM for Tablets on iTunes - link

Fraunhofer ESK Studies on the Business Use of "Skype" and on "Skype for Business"


As part of its series of studies on the use of "Skype" in the corporate environment, Fraunhofer ESK has updated its study "Using Skype in the Enterprise: Opportunities and Risks". In a separate short study, the ESK researchers also offer an assessment of "Skype for Business".

"Skype" provides communication over the Internet and is aimed primarily at private individuals. For cases where "Skype" is nevertheless used in a business environment, the ESK researchers reach the following conclusion: "For the exchange of security-relevant and business-critical information, Skype is in principle not recommended!" The reason for this assessment is that Skype's infrastructure, and above all its encryption, is not under the control of the company using it.

In the separate study, the ESK researchers assess "Skype for Business". The communication system, which until now has usually been installed locally within companies, is an evolution of the former "Microsoft Lync" offering. According to the study, "Skype for Business" can in principle be regarded as an alternative to a classic telephone system. It is advisable, however, to check carefully whether "Skype for Business" offers all the functionality you need. The ESK researchers see no blanket security concerns about using "Skype for Business" internally within a company; the security measures implemented in the specific installation play the decisive role.

The studies are available for download on the Fraunhofer ESK website:

www.esk.fraunhofer.de/de/publikationen/studien.html#telekommunikation

(SQL) Tip of the Day: Updating Statistics and Recompiling Stored Procedures


Today’s Tip…

When you run into a performance issue in SQL (on-premises or SQL Azure), a common technique to help improve performance is to update your statistics. A quick way to do this is to generate the T-SQL commands that update all of your statistics WITH FULLSCAN, then run the generated output.

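A minimal sketch of one way to do this from PowerShell, using Invoke-Sqlcmd from the SQL Server tools (server and database names are placeholders):

$server = 'MYSERVER'
$db     = 'MyDatabase'

# Generate one UPDATE STATISTICS ... WITH FULLSCAN command per user table, then run each
$cmds = Invoke-Sqlcmd -ServerInstance $server -Database $db -Query @"
SELECT 'UPDATE STATISTICS ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name) + ' WITH FULLSCAN;' AS Cmd
FROM sys.tables;
"@
foreach ($row in $cmds) { Invoke-Sqlcmd -ServerInstance $server -Database $db -Query $row.Cmd }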

For good measure you can then recompile all your stored procedures so they can use the best query plans based on these new stats.

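A matching sketch for the recompile; sp_recompile marks each procedure so it gets a fresh query plan on its next execution (same placeholder names as above):

# Generate one sp_recompile call per stored procedure, then run each
$cmds = Invoke-Sqlcmd -ServerInstance $server -Database $db -Query @"
SELECT 'EXEC sp_recompile N''' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name) + ''';' AS Cmd
FROM sys.procedures;
"@
foreach ($row in $cmds) { Invoke-Sqlcmd -ServerInstance $server -Database $db -Query $row.Cmd }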

Azure Stack Technical Preview Announced!


Hello Readers and Viewers!

It is almost here! What we have all been waiting for…The Technical Preview of Azure Stack!

Announced yesterday by Mike Neil on the Microsoft Azure Blog, “Announcing the first Technical Preview of Microsoft Azure Stack” was the first of many (and I mean many) posts, reposts, tweets, and retweets, with overwhelmingly positive social sentiment around the excitement of this amazing new offering from Microsoft.

I feel honored to have been working on this project for about a year now…and I am very pleased that I get to start sharing in all YOUR #AzureStack experiences with the release of its Technical Preview starting Friday, January 29th.

Some of you have noticed that I have been pretty silent on Twitter, blogs, videos, etc. This has been very much on purpose. As I look back on the past year, I have only really tweeted a handful of times, more teasers than anything…and since I am reminiscing, I will highlight one of my favorites:


…an image that I had to intentionally blur at the time. But now, things are starting to get much clearer. No more intentional blurring. I mean, just look at the photos posted to Twitter with the #AzureStack hashtag…

How Exciting!

So, what is next?

A few things, coming soon…

  1. I will be re-homing the http://aka.ms/AzureStack URL to something much more official (in case you didn’t know, it currently redirects to a blog post I published on May 6th: #AzureStack @ #MSIgnite).
  2. I will be tweeting and interacting on social media more, with more information, tips, tricks, etc.
  3. I will be blogging to help increase awareness of all the great stuff going on in the #AzureStack space – so much stuff coming
  4. And, one of the more exciting things for me – I will be bringing new life to my old YouTube Channel: https://www.youtube.com/user/charlesjoyMS!

Until then, as a reference, here are some of the most recent #AzureStack resources:

enJOY!

BSI: Companies Should Assume a Successful Cyber Attack


The report "The State of IT Security in Germany 2015", presented by Federal Minister of the Interior Dr. Thomas de Maizière and Michael Hange, President of the Federal Office for Information Security (BSI), provides detailed information about newly discovered vulnerabilities, the attack methods and tools currently in use, and the threat situation facing government agencies and critical infrastructure. In its conclusion, the BSI points out that the threat landscape has reached a high degree of complexity, and that the individual causes, methods, and framework conditions are increasingly interdependent and influence one another.

The report also yields a whole series of measures, some of them known and recommended for years, with which users and organizations can protect themselves against attacks:

  • Many attacks succeed only because users run inadequate patch management and use outdated software.
  • Many users still lack awareness of social engineering attacks and manipulation attempts that reach them by email or phone. A healthy dose of skepticism would be desirable; companies and other organizations should train their employees accordingly.
  • Manufacturers and service providers bear responsibility for their products and are obligated to close security vulnerabilities once they become known.
  • Companies and public administrations are, now and in the future, increasingly threatened by targeted attacks designed to exfiltrate information over an extended period (APT, Advanced Persistent Threat). Internationally active companies in particular should include these attacks in their risk management and align their detection, monitoring, and incident response measures accordingly.
  • Networked IT in industrial environments poses a high security risk, especially for critical infrastructure. Segmenting networks can increase security and, for example, prevent an attack on the office network from affecting control systems and production.
  • The number of malware programs for both stationary and mobile devices continues to rise. Because distribution cycles keep getting shorter, signature-based security solutions should offer frequent updates. Users should be aware that most infections still originate from spam emails.
  • Companies and other organizations should ask themselves what economic consequences a cyber attack could entail. Both the costs for the affected organization itself (internal costs) and those for customers, service providers, and suppliers (external costs) should be taken into account.
  • Technical progress is driving ever greater professionalization among attackers, who can increasingly rely on ready-made tools, services, and complete infrastructures for their attacks. This also means that less and less expertise is required to carry out an attack, so the number of attacks will continue to rise.
  • Because those responsible for cyber attacks are often hard to catch, they can expect high returns at comparatively low risk. This, too, will further increase the threat.

The very readable report concludes: "Rather than focusing purely on defending against attacks, an organization's risk management should anticipate and prepare for the occurrence of an IT security incident or a successful cyber attack (paradigm: Assume the Breach). To this end, structures must be created, responsibilities assigned, and processes rehearsed for dealing with an incident that must be assumed to happen." There is nothing to add to this assessment, except: it is time for companies to take this advice to heart.

Guest post by Michael Kranawetter, National Security Officer (NSO) at Microsoft Germany. In his own blog, Michael publishes everything worth knowing about vulnerabilities in Microsoft products and the software updates released for them.
