Adding photos and changing the styles of organization charts
Previous versions of Visio let you insert photos into the shapes of an organization chart, but only one shape at a time. Visio 2013 lets you import a set of photos, either into an existing organization chart or while creating a chart from external data. In the latter case, you can import the photos from a folder on your computer or on a server. In addition, when importing organization data from Microsoft Exchange Server, you can import photos from users' Microsoft Outlook profiles.
Tip o' the Week #233 – When I'm moving windows
As the nights are already drawing in, UK domestic interest in international football has long since waned to background tolerance (apart from tabloid cannibal fever), and sales of massive new TVs and beer supply forecasts have dropped back to normal summer levels, so we must amuse ourselves with other pursuits. Maybe perusing old Tips o' the Week could be one of them?
ATS Andrew Warriner commented in email that he sees lots of people struggling to move windows around when projecting during meetings (dragging between the two screens offered in an extended display). Well, it's a topic ToW has covered in part before, but it's always good for a refresher.
If you only have one screen in front of you, try pressing WindowsKey + LEFT or RIGHT arrow to snap your current window to the left or right side of the screen (or unsnap it back to normal). WindowsKey + UP or DOWN will maximise, restore or minimise the current window.
When you’re working on multiple screens (the default when you plug in a 2nd monitor or projector), just press WindowsKey + SHIFT + LEFT or RIGHT to switch the current window between your PC screen and the projected one.
Displaying an Excel spreadsheet in a window that you’d like to show off? Try Wnd+SHIFT+LEFT immediately followed by Wnd+UP, and you’ll not only have flicked the window to the big screen, you’ll have maximised it too, all in a matter of half a second. A Productivity Superhero you shall become, hmmm.
Andrew also suggested that you might want to switch off the taskbar showing in the 2nd screen, by right-clicking on the Taskbar, choosing Properties and switching off the “Show taskbar on all displays” check box.
More shortcut fun can be found here, and here.
Use PowerShell to Report on Exchange Online
Summary: Use the Windows PowerShell cmdlets to generate Exchange Online information.
Microsoft Scripting Guy, Ed Wilson, is here. This morning, I have a meeting with my manager, and this afternoon, I am meeting with a couple of teammates. The meetings will bookend another wonderful day of Windows PowerShell coolness. In honor of the meeting this morning, I made a nice pot of Darjeeling tea, and I put in a bit of peppermint and spearmint and cinnamon stick. A little bit of orange peel rounds out the flavor. It almost feels extravagant to use a very nice Darjeeling tea and add in the spices, but hey, some people add milk and sugar to the tea, so I feel that if I do not overpower the smooth earthy flavor, I am not being disrespectful.
I also scored some nice chocolate with 90-percent cocoa, and we have a standing order for macadamia nuts, so I am all set to be productive. In between the meetings today, I plan to spend some time playing around with the Exchange Online reporting cmdlets.
New is old, or old is new…something like that
One of the things about working with Exchange Online in Office 365 is the feeling that things are different. But for an experienced Exchange Server admin who knows Windows PowerShell, the similarities will far outweigh the differences. For example, this picture of a crustacean that I took while scuba diving in Aruba a few years ago shows that although it seems different, there are more similarities than differences among crustaceans.
With Exchange, make a remote connection
One of the cool things about working with Exchange is the ability to make a remote connection. In fact, I do not have to install an Exchange management module—I simply make a connection to the remote server. I then import the session (this is called implicit remoting) into my Windows PowerShell session, and I can now work as if I were on the remote server.
Note For a good introduction to working with Exchange Online, refer to Use PowerShell To Manage Exchange Online in Office 365.
Because this is a basic task, I decided that I would write a script that makes this connection for me. The first line of the script imports my credentials from an XML file that I stored on my local computer.
Note For more information about storing credentials in an XML file, refer to Getting Started with Office 365 and PowerShell.
After I have imported my credentials, I create a variable that stores the connection URI for the Exchange Online Windows PowerShell endpoint. You will need to verify this location for yourself, and you can find some help with this in the previously mentioned introductory blog post.
The next thing I do is create a new PSSession by using the connection URI, my credentials, and the authentication options. I then import the PSSession. Here is my script for connecting to Exchange Online:
$cred = Import-Clixml C:\fso\ScriptingGuyCredential.xml
$exol = "https://outlook.office365.com/powershell"
$ex = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri $exol `
-Credential $cred -Authentication basic -AllowRedirection
Import-PSSession $ex
Now that I have made my connection to Exchange Online, I can easily run any of the Office 365 reporting-related Windows PowerShell cmdlets. For more information about these cmdlets, see: Office 365 Reporting web service and Windows PowerShell cmdlets.
For example, I can run the Get-MailboxActivityReport cmdlet from the interactive window at the bottom of my Windows PowerShell ISE. If I do not include any parameters, everything returns to the interactive window as shown in the following image:
I can use some of the parameters of the cmdlet to set the begin and end dates of the report and to specify the type of report (daily, weekly, monthly, yearly). The output fields are the same as those from the web service, as are the types of reports. Therefore, it is important to look at the documentation for the web service at MailboxActivity* reports. The primary documentation for the service is detailed on this site, and it is not replicated in the cmdlet docs.
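For example, a date-bounded weekly report might be requested as follows; the parameter names (-ReportType, -StartDate, -EndDate) are taken from the reporting cmdlet documentation of the time, so treat this as a sketch and confirm them with Get-Help:
# Sketch: a weekly mailbox activity report for July 2014 (verify parameter names with Get-Help Get-MailboxActivityReport)
Get-MailboxActivityReport -ReportType Weekly -StartDate '07/01/2014' -EndDate '07/31/2014'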
Rather than enter the documentation via the Windows PowerShell cmdlet report page, I like to select the specific web service page, because it includes more information. Towards the bottom of the page, if a corresponding Windows PowerShell cmdlet exists, I will find a link to that cmdlet.
If a Windows PowerShell cmdlet exists, there is not much reason for me to mess around with the Web Reporting service and spend a decent amount of time creating a script. The only advantages of writing a script to interact directly with the web service are that it can be faster, and that there is no need to first establish an implicit remoting session and restrict myself to that imported session.
If I am using the Windows PowerShell cmdlets instead of calling the web service, I still use the docs. For example, here are the docs for the MailDetailSpam report on MSDN: MailDetailSpam report. The docs tell me everything I need to know about the report.
Here is an example of running the Get-MailDetailSpamReport cmdlet:
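A date-bounded call along the same lines might look like this; again, the parameters are an assumption to verify against the MailDetailSpam report documentation mentioned above:
# Sketch: spam detail for a date range, formatted as a table (verify parameters with Get-Help Get-MailDetailSpamReport)
Get-MailDetailSpamReport -StartDate '07/01/2014' -EndDate '07/31/2014' | Format-Table -AutoSize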
Now you have some background about using Office 365 cmdlets. This also brings to a close Office 365 Week. Join me tomorrow when I will have a guest blog post from Honorary Scripting Guy, Sean Kearney, about Active Directory.
I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.
Ed Wilson, Microsoft Scripting Guy
PowerShell Script to list Software Updates in a Software Update Group
Hi everybody,
In a recent workshop I was teaching, I was asked how to list all of the security updates in a software update group. So I wrote a quick PowerShell script to do exactly that.
Here is the code:
############################################################################################
# Update these two variables for your environment (module path and Software Update Group name)
$modulelocation = 'F:\Program Files\Microsoft Configuration Manager\AdminConsole\bin\configurationmanager.psd1'
$SUG = 'Security Updates'

Import-Module $modulelocation
CD PRI:   # switch to your ConfigMgr site drive (PRI is the site code in this example)

# Get the CI_IDs of all updates in the named Software Update Group
$SoftwareUpdates = (Get-CMSoftwareUpdateGroup | Where {$_.LocalizedDisplayName -eq $SUG}).Updates

# Resolve each CI_ID to the update's display name
Foreach ($SoftwareUpdate in $SoftwareUpdates){
    (Get-CMSoftwareUpdate -Id $SoftwareUpdate).LocalizedDisplayName
}
############################################################################################
You will just need to change the two initial variables
$modulelocation to where your psd1 sits. See Matt's blog for details on this.
$SUG to the name of your Software Update Group.
This will simply list all of the updates so you can paste them into any Change Request you need to create for Software Updates.
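If you would rather capture the output than copy it from the console, the same loop can be piped to a file after running the script above; the path below is just an example:
# Example only: write the update names to a text file instead of the console (path is an example)
$SoftwareUpdates | ForEach-Object { (Get-CMSoftwareUpdate -Id $_).LocalizedDisplayName } | Out-File 'C:\Temp\SUG-Updates.txt'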
Hopefully you find this useful, but more than that, hopefully this gets you started with some PowerShell. A fantastic free course that I always recommend to my students, if you're not sure where to begin, is this MVA course run by Jeffrey Snover and Jason Helmick.
Getting Started with PowerShell 3.0 Jump Start
Feel free to comment with your own useful PowerShell script or even a new improved version of mine below…
Microsoft.com turns 20: a look back at the site's redesigns over the years
According to the Microsoft Fire Hose blog, the Microsoft corporate website Microsoft.com has now reached its 20th anniversary.
Twenty years ago, in 1994, there were only a few thousand websites worldwide; today there are more than one billion. Throughout these 20 years, Microsoft.com has consistently ranked among the world's top 10 sites by visits. And according to the June 2014 Media Metrix traffic report published by Taiwan's ARO/MMX (data from the market research firm InsightXplorer 創市際), Microsoft Sites is also among the top 10 domains by reach in Taiwan.
Compared with the simple, plain-text design of the early days, Microsoft.com has evolved to use Responsive Web Design (RWD), meaning the site can be browsed across platforms and devices and automatically adapts its layout to each device's screen size for a better user experience.
Below is a look back at screenshots of Microsoft.com's redesigns over the years.
1994~1995
1995/Aug
1995/Nov
1996/Aug
1998
1999
2001
2003
2010
2012
2014
20140808: Advance Notification of Microsoft's August Security Bulletins
As part of its monthly bulletin release, Microsoft provides information about the upcoming bulletins one week in advance, including the number of bulletins, the affected software, and the severity ratings. The purpose of this notification is to help customers plan their update deployments.
On August 13, 2014, Microsoft plans to release 9 security bulletins. Here is a brief overview:
Bulletin ID | Maximum Severity Rating | Vulnerability Impact | Restart Requirement | Affected Software
Bulletin 1 | | Remote Code Execution | Requires restart | Microsoft Windows, Internet Explorer
Bulletin 2 | | Remote Code Execution | May require restart | Microsoft Windows
Bulletin 3 | | Remote Code Execution | May require restart | Microsoft Office
Bulletin 4 | | Elevation of Privilege | May require restart | Microsoft SQL Server
Bulletin 5 | | Elevation of Privilege | Requires restart | Microsoft Windows
Bulletin 6 | | Elevation of Privilege | Requires restart | Microsoft Windows
Bulletin 7 | | Elevation of Privilege | May require restart | Microsoft Server Software
Bulletin 8 | | Security Feature Bypass | May require restart | Microsoft Windows, Microsoft .NET Framework
Bulletin 9 | | Security Feature Bypass | Requires restart | Microsoft Windows
The above information is subject to change before the bulletins are released.
Advance notification page: detailed information for the security bulletin summaries can be found at: https://technet.microsoft.com/zh-cn/library/security/ms14-aug (English)
Microsoft Windows Malicious Software Removal Tool: Microsoft will release an updated version of the Microsoft Windows Malicious Software Removal Tool on Windows Update, Microsoft Update, Windows Server Update Services, and the Download Center.
Microsoft Global Technical Support Center, Security Technology Department
Practical use of Call Quality Methodology
This post is authored by Henrik Jørgensen from Microsoft Services in Denmark.
The following is based upon real life experience with the CQM framework from a Microsoft Consulting Services project.
One of our customers reported bad experiences with Lync. Specifically, the customer reported that their end users complained about problems with audio and video. The reported problems could be divided into 2 scenarios:
- In PC to PC communication, where 2 end-users communicated via the Lync 2010 client.
- In Conferences hosted on the Lync 2010 platform, where multiple end-users participated in a conference call.
The customer is a major global player in their specific area. They host several Lync pools globally and are represented in countries around the globe.
An approach to analyze the above problems is to use the Call Quality Methodology (CQM) framework as introduced by the Microsoft Lync product group.
The approach was to establish a baseline, in order to understand the level of the problems but also have a benchmark to compare with after implementation of changes to the Lync environment and related IT infrastructure components.
We divided the work into the following areas:
- Client analysis
- Bios and device drivers
- Patch level of the OS in use
- Server Analysis
- Compare hardware in use vs. Lync Server hardware requirements
- Bios and device drivers
- Patch level of the Server OS
- Lync Key Health Indicators as defined in the Lync Networking Guide
- CQM analysis
- Use the CQM SQL queries to analyze the data in the QoE database
- Definition of persona profiles for Lync usage
- Logical map of the Network topology
- Bandwidth estimate calculation
- Network Assessment for UC
The work involved several IT teams at the customer. A key learning was that operating a complex IT infrastructure such as Lync calls for co-operation and communication between the IT teams involved in operations and maintenance of the Lync infrastructure.
The analysis work revealed several findings:
- Clients
- The BIOS level needed to be upgraded on some PCs
- It was necessary to deploy newer versions of drivers for Network Interface Cards in the clients
- Servers
- The hardware requirements were met
- Newer BIOS and firmware were needed at the server level
The KHI, CQM and Network Analysis revealed other findings. These are presented in more detail in the following.
Server Key Health Indicators
We used the KHI collection PowerShell script from the networking guide. We collected data for 5 working days. Afterwards, the data was imported to Microsoft Excel.
Among other critical findings, we observed packet loss on some of the front-end servers. This called for further analysis of the server problems. A firmware upgrade was part of the solution.
CQM Analysis
The CQM SQL queries are divided into 3 areas:
- Endpoints – the methodology will help to determine whether there are problems with the client PCs or the devices connected to them.
- Server to server / gateway traffic – in order to document whether the Lync media servers are healthy or not. Furthermore, the methodology can document whether conditions on the servers are contributing to packet loss and jitter.
- Network – the methodology will document which LAN subnets the poor Lync calls are coming from.
We used the queries in the Networking guide. A summary of the findings is provided below:
Endpoints
The CQM queries revealed that a majority of end-users at given locations did not use Lync certified devices. The customer initiated a process to
- Provide the end-users with certified devices
- Train the end users to use the devices
Server traffic
The CQM queries documented packet loss between the AVMCU and the Mediation Server and from the Mediation Server to the gateway at some sites. Further analysis looked at
- Non Lync Software on the servers
- Lync pre-requisites regarding antivirus exclusions
- Ensure that the network equipment is healthy and follows the Microsoft guidelines from the Open Interoperability List
- Firmware on the servers' network interface cards
Network traffic
We identified several issues
- All access to the Lync infrastructure was via VPN for the customer's employees when working remotely. Split tunneling was not implemented.
- Some internal peer-to-peer traffic was relayed via the Edge servers.
- RTT > 500 ms was observed on some network connections
- Packet loss on WiFi networks.
All above findings called for additional analysis and work in order to solve the problems.
Network Analysis
Together with the customer, we defined three persona profiles. These were defined in the bandwidth calculator.
The customer's HR department provided us with a headcount for each of the persona profiles at the specific locations where the customer is represented.
The calculations in the bandwidth calculator revealed:
- Possible overflow of the QoS queue allocated for Lync traffic
- Locations where more bandwidth was needed to handle the Lync traffic
Furthermore, the customer initiated a network assessment of the WAN. The assessment documented the predictions from the bandwidth calculator.
Customer initiated actions to improve the Lync experience
The customer initiated several actions to improve the Lync experience of their end-users. In summary these are
- Acquiring more WAN bandwidth for given locations
- Implementation of Quality of Service on the network
- Renewal of network equipment at given locations
- Improvements to existing WiFi implementations at given sites
- Training of the end-users
- Proactive use of the CQM methodology in order to monitor improvements
Key learnings
With the CQM approach, we helped our customer to not only troubleshoot and fix problems with their Lync infrastructure. We also established a methodology that is used pro-actively in their environment to prevent problems in Lync communications internally as well as with external parties.
A key learning is that CQM is a very good framework, but the value from it can be very limited if the processes at a customer are not aligned to CQM and a proper Lync service mapping is not in place. Furthermore, the different IT teams at the customer need to communicate very closely about the operation and maintenance of the IT infrastructure.
[Script Of August 8] How to add IE favorites to OneDrive in Windows 8
Script Download:
The script is available for download from http://gallery.technet.microsoft.com/How-to-add-IE-favorites-to-183db0ea. You can also use Microsoft Script Browser for Windows PowerShell ISE to download the sample with one button click from within your scripting environment.
The goal of this script is to synchronize IE favorites to OneDrive.
On Windows 8.1, Internet Explorer 11 can synchronize favorites itself; however, there is no such functionality on older systems such as Windows 7. This script fills that gap by synchronizing IE favorites with OneDrive.
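The heart of the approach can be sketched in a few lines; this is only a rough illustration of the idea, not the gallery sample itself, and the OneDrive folder location below is an assumption you will need to adjust for your machine:
# Rough sketch of the idea only - not the downloadable gallery sample.
# $oneDriveFavorites is an assumed location; point it at your own OneDrive (or SkyDrive) folder.
$favorites         = Join-Path $env:USERPROFILE 'Favorites'
$oneDriveFavorites = Join-Path $env:USERPROFILE 'OneDrive\Favorites'

if (-not (Test-Path $oneDriveFavorites)) {
    New-Item -ItemType Directory -Path $oneDriveFavorites | Out-Null
}

# Mirror the shortcuts, preserving the folder structure
Copy-Item -Path (Join-Path $favorites '*') -Destination $oneDriveFavorites -Recurse -Force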
You can find more All-In-One Script Framework script samples at http://aka.ms/onescriptingallery
Friday with International Community Update - Communities on TechNet
Hello everyone, this is my first blog post, and as on every Friday, today is International Community Update day.
Today I will list the communities you can find on Microsoft TechNet.
Turkish WikiNinjas Community: http://blogs.technet.com/b/wikininjastr/
English WikiNinjas Community: http://blogs.technet.com/b/wikininjas/
Portuguese WikiNinjas Community: http://blogs.technet.com/b/wikininjasbr/
French WikiNinjas Community: http://blogs.technet.com/b/wikininjasfr/
Thank you, and have a blessed Friday!
-Turkish wiki ninja Alican
Reference for the links: http://blogs.technet.com/b/wikininjas/archive/2014/06/27/here-come-the-french-wiki-ninjas.aspx
Creating a Windows Server 2012 DHCP Failover Relationship
Here's my quick and simple recipe for creating a Windows Server 2012 DHCP failover relationship.
The following ingredients are required:
- two Windows Server 2012 DHCP servers with the DHCP role installed
- two or more scopes already configured as type DHCP on the first DHCP server
All set? Good, let's cook...
Let's configure a Hot-Standby relationship with the first DHCP server as the primary server. First, get the scopes on HALODHCP01:
$Scopes = Get-DhcpServerv4Scope -ComputerName HALODHCP01
Next, create the Hot-Standby relationship:
Add-DhcpServerv4Failover -Name "HALO_FAILOVER" -ScopeID $Scopes[0].ScopeID -ComputerName HALODHCP01 -PartnerServer HALODHCP02 -ServerRole Active -AutoStateTransition $True -SharedSecret "8DKSZfF31Q" -Force
Now, check the failover out:
Get-DhcpServerv4Failover -ComputerName HALODHCP01
Here's stage two - add any additional scopes to the failover. First, get a list of scopes, ignoring the one we used to create the failover:
$ScopeObjects = $Scopes | Select-Object -Skip 1
Next, add those scopes to the failover relationship:
$ScopeObjects | ForEach-Object {Add-DhcpServerv4FailoverScope -Name "HALO_FAILOVER" -ComputerName HALODHCP01 -ScopeId $_.ScopeID}
Finally, list the scopes that are part of the relationship:
(Get-DhcpServerv4Failover -ComputerName HALODHCP01).ScopeID.IPAddressToString
Quick and easy, unlike Lobster Thermidor.
Lumia Appetizer August 2014
More tiles, more possibilities, more fun: with the Lumia Cyan update, Lumia owners have recently been benefiting from new Windows Phone 8.1 features, a further streamlined layout, and of course attractive new apps.
New releases and app updates from recent weeks will particularly please fitness fans, and genuine bargain hunters also get their money's worth: apps such as Fitbit and mydealz enrich the Windows Phone Store line-up.
Efficient networking requires constant attention, so it helps that the XING app delivers everything worth knowing straight to your phone. The ALDI TALK app offers the perfect overview of all relevant ALDI TALK tariff details and options. Other interesting new releases this month are Wunderlist Beta, BBM Beta, the chauffeur service Uber, and DreamWorks Dragons Adventure.
App highlight of the month
Fitbit: Fitness and sufficient sleep are the best prerequisites for a healthy body. With the Fitbit app, Lumia owners always keep track of their fitness level and can also positively influence their eating and sleeping habits. A compact fitness tracker worn on the wrist or carried in a pocket synchronises step counts and calorie measurements with the Windows Phone in real time, where users can see their progress at a glance via the Live Tile on their Lumia. Anyone whose sporting ambition only really blossoms in competition can use the app to challenge friends to exciting fitness challenges.
mydealz: With the mydealz app on a Lumia, bargain hunters will never miss the perfect deal again. Offers from Germany's largest shopping community are delivered to the smartphone in real time. Users can define specific products and are then continuously informed about new offers or lowest prices. And anyone who finds a great deal while out and about and wants to share it with the community can do that directly from the app as well.
App update of the month
Twitter: The update to the Twitter app includes several important improvements. More content can now be shared with followers noticeably faster; for example, several photos can now be uploaded in a single tweet. There is also a preview that populates your own timeline with photo tweets, Vine videos, and other content.
Tools & productivity
Wunderlist Beta: Lists make everyday life easier, especially when you can share them directly with others. With the Wunderlist Beta app, to-do lists can easily be created and shared with defined groups of people. What needs to be bought today, what is still missing before leaving on holiday: with this app you keep track, whether in your free time or at work. Synchronisation happens in real time, and group members can discuss their lists using the comment function, in case there is ever disagreement about who does the shopping or the washing-up.
BBM Beta: Instant chats, sharing pictures, sending and receiving messages: BBM brings all this and more together. BBM users are always up to date, because they can share content and communicate in many ways with the BBM groups of their choice. BBM users can see when their message has been delivered and read, are informed immediately about a reply, and can chat with several people at the same time, all while enjoying strong privacy protection. The dedicated Windows Phone version of BBM comes in the modern Windows Phone design and offers many attractive features.
XING: The free XING app makes the popular network available everywhere. It offers users not only potential contacts among more than 14 million members, but also numerous practical features such as access to up-to-date contact details, continuous updates from all contacts, and the ability to search specifically for people and companies on XING. Handy: in addition to updating your own details on the go, you can also see who has visited your profile page.
ALDI TALK: Anyone who owns a Lumia and makes calls and surfs via ALDI TALK can now keep track of their tariff even more easily with the matching ALDI TALK app. They can also flexibly adjust it to their needs at any time while on the go, check or top up their credit balance, book additional data packages, or view the status of tariff options.
Leisure & games
Uber: Being mobile has many dimensions, and Uber's service contributes significantly to it. Just two taps in the Uber app and a driver arrives within a few minutes to chauffeur the user to the desired destination. It all works on demand: there are no advance bookings or reservations, and no long waiting times to factor in. Lumia owners can also do without cash when using the Uber app: payment is made via a credit card stored in encrypted form.
DreamWorks Dragons Adventure: Who hasn't dreamed of riding a real dragon through the neighbourhood? A Viking version of the real world is also a genuine temptation for some. DreamWorks Dragons Adventure: World Explorer makes it possible: an innovative gaming experience that brings real fantasy into the real world. On their mission to free dragons, players can fly over not only the whole world but also familiar landmarks and streets of their own neighbourhood. To do so, they simply have to play the game on the go, ideally of course with a Lumia.
Posted by Pina Meisel
Communications Manager
Funding programmes, part 4: Programmes for women
Gemeinsam. Visionen. Realisieren.: Win a ticket to the Deutsche Partnerkonferenz 2014
Under the motto "Gemeinsam. Visionen. Realisieren." ("Realising visions together"), this year's Deutsche Partnerkonferenz awaits you from 30 September to 1 October at the Congress Center Rosengarten in Mannheim. There you will learn first-hand about all the news, strategies, and initiatives for Microsoft partners. Use the excellent networking opportunities on site to profitably advance your business.
Do you want to be there in Mannheim and realise visions together with us? Then we have good news for you: if you manage to answer 10 of the 17 questions from our exhibitors correctly, you have the chance to win one of three tickets to the Deutsche Partnerkonferenz.
Simply send the correct answers to dpk_gewinnspiel@microsoft.com by 5 September and you will automatically be entered into the ticket draw.
And here are the questions from our exhibitors:
Which milestone birthday is ADN celebrating this year?
Where is the headquarters of ALSO Deutschland GmbH located?
How many people make up the management team of cloudpartner.de?
What is the name of Deutsche Telekom's Microsoft Lync-based UCC product for partners?
In which country is Fujitsu headquartered?
What does the abbreviation GBS stand for?
Which beetle is depicted in the GWAVA logo?
Which anniversary is the Intel® Pentium® processor celebrating in 2014?
Where is the headquarters of the company Infopulse located?
What is the name of the Plantronics partner programme?
How many interlocking circles form the Polycom logo?
Which anniversary is Raber+Märcker celebrating this year?
"Turn any environment into your office": what is the name of Sennheiser's new mobile conferencing solution?
Where is Sonus Networks headquartered?
What is the URL of the comprehensive Tech Data Microsoft online information platform?
Which Vision Solutions product enables you to migrate from VMware to Hyper-V or from AWS to Azure?
What is the slogan of windream GmbH that appears at the end of its communication materials?
Good luck searching and finding. We look forward to welcoming the winners at the Deutsche Partnerkonferenz. You can find more information about DPK 2014 on our event page.
Conditions of participation
1) All partners participating in the Microsoft Partner Network (Registered Members or competency partners) are eligible. Microsoft employees and their relatives, as well as public officials and persons with special public-service obligations, are excluded from participation.
2) To take part, please send 10 correct answers by email to dpk_gewinnspiel@microsoft.com. All entries with 10 correct answers to the questions about our exhibitors will be entered into the prize draw. Each participant may enter once per question. Multiple emails from the same participant for the same question will be excluded from the draw.
3) The closing date for entries is 5 September 2014, 23:59. All entries with 10 correct answers received at dpk_gewinnspiel@microsoft.com by 5 September 2014, 23:59 will take part in the draw. The time of receipt of the email is decisive.
4) Emails with a total of 10 correct answers will be entered into the draw.
5) There are 3 tickets to the Deutsche Partnerkonferenz to be won.
6) All winners will be notified by email.
7) The winners consent to the publication of their names on the MPN Facebook page.
8) Cash payment of the prizes is not possible.
9) The judges' decision is final; legal recourse is excluded.
10) Warranty claims regarding the prizes against Microsoft Deutschland are excluded.
Disclaimer
By taking part in the DPK prize draw, I consent to the storage of my data by Microsoft Corporation in the USA, Microsoft Deutschland GmbH, and other Microsoft subsidiaries worldwide. The data collected will be used solely for drawing and notifying the winners; it will not be used for other advertising purposes or passed on to third parties.
Yammer seminar on 8/22
This is the monthly Yammer seminar.
Walk-in attendance is also fine (though you may have to stand), so please feel free to drop by.
***** Excerpt below (as of 8/8)
Yammer Conference (3rd session): Why do companies need social? Talent utilisation at excellent companies and enterprise social networking in practice
Microsoft, Shinagawa Grand Central Tower, 30th floor, Open Seminar Circle
3 minutes' walk from the Konan Exit of JR Shinagawa Station via the skywalk; 6 minutes' walk from Keikyu Shinagawa Station via the skywalk
Why do companies need social? Talent utilisation at excellent companies and rolling out an internal SNS
Admission: Free
Organiser: CSK Win Technology Corporation (株式会社CSK Winテクノロジ)
[ Agenda ]
1. Why is enterprise social needed now?
From centralised to autonomous, decentralised networks: the future of work and the shape of media.
Speaker: Naohiko Maeda / CSK Win Technology / Community Architect
2. Putting social media to business use: a new way of working in an era where public and private mix
Drawing on involvement in numerous corporate SNS rollouts since the dawn of internal SNS in 2005, including growing NEC's internal SNS to 65,000 users, the speaker shares that know-how in one session.
Speaker: Hideyuki Fukuoka, Chair of the EGM Forum
3. Yammer as a way of working
How Yammer is actually used inside Microsoft: removing communication barriers and providing a place where employees can exchange information and opinions drives business growth.
Speaker: Shuichi Araki / Microsoft Japan / Customer Success Manager
4. The organisational revolution sparked by social: how the Nikkei Online Edition was born
The Nikkei Online Edition is the only success story among the newspaper industry's future-oriented projects. Tomomi Tsubota, who was responsible for its basic design, became absorbed in online PC networking from 1984, and out of his interactions with people outside the company he developed the concept for the new business. Rather than "for the company", he made "the new form of media that users are seeking" his company's strategy. The conviction that kept him from backing down even when he clashed with the company president was rooted in "social".
Speaker: Tomomi Tsubota
※ Joined Nihon Keizai Shimbun (Nikkei) in 1972. After working as a reporter he moved into corporate planning, where he shaped the basic strategy for Nikkei's internet business, including the partnership with America Online, and created the framework for today's Nikkei Online Edition. He retired in 2009 and has since served as a specially appointed professor at Keio University and the Kyoto Institute of Technology, among others. As an evangelist for regional revitalisation he runs writing courses around the country, and he is among the most radical commentators in the Japanese media industry. His most recent book is 『サービス文明論』 ("On Service Civilisation", available on Amazon).
Creating Yammer Relying Party Trust in ADFS
Yammer supports any SAML 2.0 compliant Identity Provider (IdP). These include ADFS 2.0 & 3.0, Shibboleth, OneLogin, PingFed, Okta, etc. The first step to implementing Single Sign-On with Yammer is to open a Yammer SSO service request. You will be asked to provide your IdP metadata and token signing certificate, so you may want to speed up the data collection phase by providing the required files as soon as you open the service request. In the case of an ADFS IdP, the metadata can be downloaded from https://your-server-URL/FederationMetadata/2007-06/FederationMetadata.xml, and you can use these instructions to extract the token signing certificate. You should zip the token signing certificate before sending it to the Yammer support rep, because .cer files are quarantined by Microsoft IT.
Once you’ve provided the required files, Yammer support will create the connection and respond back with the Yammer Service Provider (SP) metadata. At this point, the connection is inactive and there will be no impact on users' experience. Download the metadata file and start the process of creating the relying party trust and claims rule as described below.
Add Relying Party Trust
- In the ADFS Management Console, navigate to Trust Relationships -> Relying Party Trusts
- Right Click Relying Party Trusts and click Add Relying Party Trust
- Click Start
- Select "Import data about the relying party from a file", browse to the metadata file that was provided by Yammer Support and click Next.
- Add a Display name (and Notes if necessary) and click Next
- Multi-factor authentication is not required, click Next
- Permit all users to access this relying party, Next.
- Check the Edit Claim Rules box
Edit Claim Rules for Relying Party Trust
- In the Issuance Transform Rules tab, click Add Rule
- In the Select Rule Template, leave "Send LDAP Attributes as Claims" selected
- In the Configure Claim Rule page, enter Get attributes in the Claim rule name box
- In the Attribute store list, select Active Directory
- In the Mapping of LDAP attributes section, create the following mapping: E-Mail-Addresses -> SAML_SUBJECT (a scripted equivalent is sketched after these steps)
- Click Finish
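If you prefer scripting to the wizard, the trust and claim rule can also be created with the ADFS PowerShell cmdlets. The display name, metadata path and rule text below are illustrative assumptions; the rule should end up equivalent to what the "Send LDAP Attributes as Claims" template generates:
# Sketch: create the relying party trust from the Yammer SP metadata file (name and path are assumptions)
Add-AdfsRelyingPartyTrust -Name "Yammer" -MetadataFile "C:\Temp\yammer-sp-metadata.xml"

# Sketch: issue the user's mail attribute as SAML_SUBJECT, mirroring the LDAP-attribute rule above
$rule = @'
@RuleTemplate = "LdapClaims"
@RuleName = "Get attributes"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory", types = ("SAML_SUBJECT"), query = ";mail;{0}", param = c.Value);
'@
Set-AdfsRelyingPartyTrust -TargetName "Yammer" -IssuanceTransformRules $rule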
Testing your Single Sign-On Connection
Why have a dog and bark yourself
Back in the sixties my Dad worked in an IT department where there were about a hundred people just to operate the one ICL mainframe in that data centre. These operators had banks of lever arch files containing instructions to handle every aspect of the day-to-day running of this environment, from changing tapes to setting up and executing programs like the weekly payroll run. When I started my IT career in the eighties I could do a lot of this setup in shell scripts on my Unix Data General server, and I could look after backups and updates all by myself; of course the kit was much more reliable too. Moving forward to today, we seem to have lost some of these scripting skills and seem to be content to use the UI.
However, if you want to manage servers at hyper-scale (1 IT admin to every 1,000+ VMs), then logging into each one and changing them is simply not efficient enough. This approach is just as inefficient at smaller scales, say just ten VMs, because maintenance will only be done occasionally and the tools will be unfamiliar, meaning that changes will take longer than they need to and may lead to errors. If you have read any of my stuff or seen me present over the last year, you'll know the solution is PowerShell. If that was true a year ago, it's even more relevant now, as a couple of interesting technologies have quietly been released that enhance management of virtual machines and services...
- PowerShell 4 has introduced the concept of desired state configuration, where a declarative configuration (compiled to a MOF document) is used to establish what the state of a server should be, and this can then be used either to test or to enforce that configuration on a given set of servers (a minimal sketch follows this list). At the simplest level this could be a set of features and settings on a given server, through to ensuring that given files and versions of applications are also installed. This is useful in setting up load-balanced web servers, which must be identical, and also for Session Hosts in Remote Desktop Services.
- Windows Azure Pack allows you to run the management portal that Microsoft uses for creating services in Azure on your own servers. It builds on System Center 2012 R2, specifically Virtual Machine Manager and Orchestrator, but makes calls to these services using an adjunct to the Azure Pack called Service Management Automation (SMA). This is PowerShell based, but it is a classic 3-tier service: a load balancer in front of worker roles processing tasks, driven by a database backend (SQL Server or MySQL). This is an important distinction because, while normal PowerShell will fail if the server it's executing from fails, SMA PowerShell runbooks (as distinct from those written in Orchestrator) are resilient. The PowerShell itself is quite different too; for example, a runbook can resume from designated checkpoints within a script if a failure occurs and be rerun from there. The Azure Pack also allows you to fully package a virtual machine based on a Virtual Machine Template, but here you can inject packages to run inside the VM after it's created and accept parameters from the Gallery Wizard, just like you can in Azure. It's also possible to quickly create your own gallery images on Azure itself in much the same way.
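To give a flavour of the desired state configuration idea from the first bullet, here is a minimal sketch (the server name, share path and feature are placeholders, not a definitive implementation) that declares a role and a file as things that should be present:
Configuration WebServerBaseline
{
    # Placeholder node name - point this at your own server(s)
    Node "WEB01"
    {
        # The IIS role must be installed...
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        # ...and a known-good default page must exist (paths are placeholders)
        File DefaultPage
        {
            Ensure          = "Present"
            SourcePath      = "\\fileserver\content\index.htm"
            DestinationPath = "C:\inetpub\wwwroot\index.htm"
        }
    }
}

# Compile to a MOF; apply it with Start-DscConfiguration or distribute it via a pull service endpoint
WebServerBaseline -OutputPath .\WebServerBaseline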
The Azure portal showing the end-user experience of creating a VM
The basic thrust of this is that, where until now Microsoft has given you the nuts and bolts of the Cloud OS to automate your own data centre, there is now more of a focus on managing your data centre in exactly the same way as Azure, which means that one IT admin could potentially manage thousands, not just hundreds, of VMs. Even if you don't have that scale, you'll get time back and be more agile. That is important, as I have talked to more than one customer who still has to wait weeks for VMs to be provisioned on VMware, and actually that's not a VMware problem per se; that's an IT department that hasn't got its head around process standardisation and automation. So in those cases the tech-savvy user simply fires up VMs on Amazon, Google or Microsoft and bypasses the IT department roadblock. This then grows into a bigger problem as they build trust in those platforms, so more work heads off to the cloud, meaning the over-controlling IT admins have lost the very thing they wanted: control!
Desired State Configuration (DSC) Nodes Deployment and Conformance Reporting Series (Part 1): Series agenda, Understanding the DSC conformance endpoint
In any configuration management process, once the configuration is applied to the target systems, it is necessary to monitor those systems for any configuration drift. This is an important step in Desired State Configuration (DSC), and it is addressed differently depending on the chosen deployment mode for DSC: push mode or pull mode.
In the push method for configuration delivery, the configuration MOF file is copied manually or via another solution to the target machine, and the Start-DscConfiguration cmdlet provides an immediate indication of success or failure of the configuration change.
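For instance, pushing a compiled configuration to a node from a management machine might look like the following (the path and computer name are placeholders):
# Apply the MOF files under .\MyConfig to a remote node and wait for the outcome (names are placeholders)
Start-DscConfiguration -Path .\MyConfig -ComputerName DSCNODE1 -Wait -Verbose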
Now, in large scale deployments, it is likely you will want to look at the pull method of configuration delivery. In this mode, the target systems receive or download their configuration from a pull service, based on their ConfigurationID. This can either be an SMB file share or a REST-based pull service endpoint. When using the REST-based pull service endpoint, to facilitate monitoring of the configuration enactment process, we can deploy a conformance endpoint that provides the last configuration run status from the target node.
This blog series will focus on some examples of how to optimize pull mode configuration deployment, and how to report on the health of DSC nodes in such an environment.
This whole series has been a joint effort with guest blogger Ravikanth Chaganti. It is our first series of posts with Ravi, but he has been a PowerShell MVP since 2010 and a regular contributor to PowerShellMagazine.com. He also publishes on his own blog. More specifically in this series, Ravi is behind the updated configuration to simultaneously deploy the pull service and conformance endpoints, and the material on how to query and report with the conformance endpoint. It's been a pleasure to work on this content with Ravi, and we look forward to any potential collaboration in the future!
Blog post series agenda
There are 4 blog posts in this series:
- Series agenda, Understanding the DSC conformance endpoint (this post)
- How to deploy a pull service endpoint and automate the configuration of the DSC nodes
- How to leverage the conformance endpoint deployed along with part of the pull service endpoint, to report on accurate configuration deployment and application: Are my DSC nodes downloading and applying the configurations without encountering any errors?
- Some options to determine if the nodes are conformant with the given configuration: Are my DSC nodes conformant with the configuration they are supposed to enforce?
Note : This last blog post will be published at a later date
Understanding the conformance endpoint
You may wonder why there are separate posts to report on the status of configuration being downloaded/applied (blog post #3), and to report on enforcement (upcoming blog post #4). This relates to something that is critical to understand before implementing the conformance endpoint: As of today, the conformance endpoint retrieves status about nodes as they download/apply configurations, or fail to do so. While this first level of information is important (understanding that configuration application should work if this first process is "green"), it does not provide status about whether a node is actually compliant regarding the configuration it is supposed to enforce. Blog post #4 will be published later in the series, and will look at new capabilities coming soon in Windows Management Framework (WMF) and DSC to surface the actual configuration health and drift, as well as sample ways to work with them, with both the conformance server and other systems, so stay tuned!
Desired State Configuration (DSC) Nodes Deployment and Conformance Reporting Series (Part 2): Deploying a pull service endpoint and automating the configuration of the DSC nodes
In this post, we will cover how a pull service endpoint can be installed, and how nodes can be configured to point to this server and retrieve their DSC configurations.
There are already a few blog posts regarding the installation of the pull service endpoint, including this post that shows a snippet on how to deploy pull service and conformance endpoints via a DSC configuration…So, you might wonder why we’re having a new one here!
Well, today’s post…
- includes an updated working snippet that combines both deployments (pull service endpoint and conformance endpoint), also updated to include the needed Windows Authentication dependencies that have been discussed in the blog comments
- also covers one example of how to overcome one of the challenges when configuring nodes for pull service endpoint, which is managing the GUIDs for the nodes.
So, here are the steps we are going to go through in this post:
- Check prerequisites to install the pull service endpoint
- Deploy/configure the pull service endpoint
- Provisioning configurations for the nodes
- Configuring the nodes to point to the pull service endpoint
- Checking nodes are applying the configuration
- We'll do this last step manually and on a single node in this post, and then move to the capabilities offered by the conformance endpoint to do this at scale in a larger environment, in the 3rd blog post
Checking prerequisites to install the pull service endpoint
Windows Management Framework (WMF) 4.0 is a prerequisite to leverage DSC so, to make things easier, we will be deploying our pull service and conformance endpoints on a Windows Server 2012 R2 machine, which includes WMF 4.0 out of the box.
You will also need the DSC Resource Kit from this link.
The DSC Resource Kit comes as a zipped package, and you just have to copy its content into the $env:ProgramFiles\WindowsPowerShell\Modules folder on the future pull/conformance server.
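Assuming the kit has been unzipped locally (the source path below is only an example), the copy is a one-liner:
# Copy the unzipped DSC Resource Kit modules into the machine-wide module path (source path is an example)
Copy-Item -Path 'C:\Downloads\DSC Resource Kit\*' -Destination "$env:ProgramFiles\WindowsPowerShell\Modules" -Recurse -Force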
Configuring the pull service endpoint
Here is the script you would need to run on the server, from ISE for example. In our situation, this was run on a server called DSCSERVER, as seen in line 54.
001  configuration Sample_xDscWebService
002  {
003      param
004      (
005          [string[]]$NodeName = 'localhost',
006
007          [ValidateNotNullOrEmpty()]
008          [string] $certificateThumbPrint = "AllowUnencryptedTraffic"
009      )
010
011      Import-DSCResource -ModuleName xPSDesiredStateConfiguration
012
013      Node $NodeName
014      {
015          WindowsFeature DSCServiceFeature
016          {
017              Ensure = "Present"
018              Name   = "DSC-Service"
019          }
020
021          WindowsFeature WinAuth
022          {
023              Ensure = "Present"
024              Name   = "web-Windows-Auth"
025          }
026
027          xDscWebService PSDSCPullServer
028          {
029              Ensure                = "Present"
030              EndpointName          = "PullSvc"
031              Port                  = 8080
032              PhysicalPath          = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer"
033              CertificateThumbPrint = $certificateThumbPrint
034              ModulePath            = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules"
035              ConfigurationPath     = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration"
036              State                 = "Started"
037              DependsOn             = "[WindowsFeature]DSCServiceFeature"
038          }
039
040          xDscWebService PSDSCComplianceServer
041          {
042              Ensure                = "Present"
043              EndpointName          = "DscConformance"
044              Port                  = 9090
045              PhysicalPath          = "$env:SystemDrive\inetpub\wwwroot\PSDSCComplianceServer"
046              CertificateThumbPrint = "AllowUnencryptedTraffic"
047              State                 = "Started"
048              IsComplianceServer    = $true
049              DependsOn             = @("[WindowsFeature]DSCServiceFeature","[WindowsFeature]WinAuth","[xDSCWebService]PSDSCPullServer")
050          }
051      }
052  }
053
054  Sample_xDscWebService -ComputerName "DSCSERVER"
055
056  Start-DscConfiguration -Wait -Verbose .\Sample_xDscWebService
A few notes regarding this script:
- This configuration simultaneously deploys the conformance endpoint that we will use later in the blog post series, to see how the nodes are doing when downloading and applying their assigned DSC configurations.
- The conformance endpoint uses Windows Authentication, and therefore the WinAuth Windows feature needs to be installed. In our configuration script, we used the DependsOn property to take care of the dependencies for the conformance endpoint.
- Note that the xDSCWebService still refers to the conformance endpoint as “compliance endpoint” (and actually enforces it in the URL, even if you were to rename PSDSCComplianceServer to another value). DSC components are being transitioned to the updated “conformance endpoint” name, that we prefer to use now and throughout this blog post series.
- Finally, the last few lines are just here to apply the configuration.
Here is the output of the script running, with the future URIs highlighted, for the two web services:
We can also see that the content for the two websites has been created in the WWWROOT folder on the server:
Finally, running Get-DscConfiguration shows that the configuration has been applied, in case we still had any doubts:
Provisioning configurations for the DSC nodes
On the DSC server, here is a script that will do the following:
- Line 21: The script receives a list of nodes to configure – In this sample, this is in the form of an array, but you could very well query Active Directory, a CMDB, a custom database, etc.
- Lines 23-30: For each node, it generates a GUID that will be used to make the configuration unique to that node, and generates a MOF file for the node.
- The configuration applied is here called “TestConfig” and is detailed at lines 1-19. This is just a very basic sample configuration that ensures that the content of a shared folder is copied locally to the temp folder on the local node
- Also note how the Node/GUID association is added to a CSV file at line 29. This will be important when we configure the node at the next step, and is there to ensure the node has a location to query its GUID when configuring its LCM, without any manual intervention. The CSV approach makes it easy to show the content as a blog post sample; needless to say, leveraging a database or a more reliable/secure approach would be preferred, as discussed in the community.
- Line 32-39: A checksum is generated for each file, and all files generated are copied to the pull service configuration store, so that they are made available for the future nodes
001  Configuration TestConfig
002  {
003      Param(
004          [Parameter(Mandatory=$True)]
005          [String[]]$NodeGUID
006      )
007
008      Node $NodeGUID
009      {
010          File ScriptPresence
011          {
012              Ensure          = "Present"
013              Type            = "Directory"
014              Recurse         = $True
015              SourcePath      = "\\storagebox\SourceFiles\SCCM Toolkit"
016              DestinationPath = "C:\Temp\DSCTest"
017          }
018      }
019  }
020
021  $Computers = @("DSCNODE1", "DSCNODE2")
022
023  write-host "Generating GUIDs and creating MOF files..."
024  foreach ($Node in $Computers)
025  {
026      $NewGUID = [guid]::NewGuid()
027      $NewLine = "{0},{1}" -f $Node,$NewGUID
028      TestConfig -NodeGUID $NewGUID
029      $NewLine | add-content -path "$env:SystemDrive\Program Files\WindowsPowershell\DscService\Configuration\dscnodes.csv"
030  }
031
032  write-host "Creating checksums..."
033  New-DSCCheckSum -ConfigurationPath .\TestConfig -OutPath .\TestConfig -Verbose -Force
034
035  write-host "Copying configurations to pull service configuration store..."
036  $SourceFiles = (Get-Location -PSProvider FileSystem).Path + "\TestConfig\*.mof*"
037  $TargetFiles = "$env:SystemDrive\Program Files\WindowsPowershell\DscService\Configuration"
038  Move-Item $SourceFiles $TargetFiles -Force
039  Remove-Item ((Get-Location -PSProvider FileSystem).Path + "\TestConfig\")
When the script runs, it creates the MOF files and shows the checksums (because of the -Verbose switch):
The files are present in the DSC pull service configuration store, including our CSV file:
And here is the content of the CSV file:
Applying configuration on the DSC nodes
The goal here will be to be as dynamic as possible, so that a single generic PS1 file could be sent to the DSC nodes and "discover" the configuration to apply. The script could be sent via the method of your choice, including software distribution tools like Configuration Manager, part of the System Center suite.
Here is the script we will be using:
001  Configuration SimpleMetaConfigurationForPull
002  {
003      Param(
004          [Parameter(Mandatory=$True)]
005          [String]$NodeGUID
006      )
007
008      LocalConfigurationManager
009      {
010          ConfigurationID = $NodeGUID;
011          RefreshMode = "PULL";
012          DownloadManagerName = "WebDownloadManager";
013          RebootNodeIfNeeded = $true;
014          RefreshFrequencyMins = 15;
015          ConfigurationModeFrequencyMins = 30;
016          ConfigurationMode = "ApplyAndAutoCorrect";
017          DownloadManagerCustomData = @{ServerUrl = "http://DSCSERVER.contoso.com:8080/PullSvc/PSDSCPullServer.svc"; AllowUnsecureConnection = "TRUE"}
018      }
019  }
020
021
022  $data = import-csv "\\dscserver\c$\Program Files\WindowsPowershell\DscService\Configuration\dscnodes.csv" -header ("NodeName","NodeGUID")
023
024  SimpleMetaConfigurationForPull -NodeGUID ($data | where-object {$_."NodeName" -eq $env:COMPUTERNAME}).NodeGUID -Output "."
025
026  $FilePath = (Get-Location -PSProvider FileSystem).Path + "\SimpleMetaConfigurationForPull"
027
028  Set-DscLocalConfigurationManager -ComputerName "localhost" -Path $FilePath -Verbose
Some important parts of the scripts are:
- The configuration (lines 1-20): This sets the LCM to pull mode, and specifies which pull service endpoint to use. It also specifies whether we should just monitor DSC configurations or try to auto-correct them; in this sample, we apply and auto-correct. The refresh frequency is also specified here.
- In the configuration, note that we need to specify the GUID for the ConfigurationID parameter. This is why we created that CSV file, so that the script can “discover” which GUID to use, at lines 22 and 24.
- Note: The CSV file is directly accessed via the administrative share, to keep things simple in this sample. In reality, it would likely be on a secured share elsewhere. Or, as we discussed earlier, you might be using a custom database or a CMDB to store this data instead of this CSV sample.
- The LCM configuration is compiled by invoking SimpleMetaConfigurationForPull (line 24) and applied with Set-DscLocalConfigurationManager (line 28)
This is the output of the script running on a node:
When we display the LCM configuration, we can see that the pull service endpoint is now configured in the LCM:
Checking that configurations are being applied to nodes
After the interval (or, for testing purposes, you can force things with a reboot, or via scripting), we can see the configuration pulled, in the event log – This is for the node called DSCNODE1, and you can see how the GUID matches what we had previously.
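If you prefer to force a pull from a script rather than reboot, one option is to start the LCM's consistency scheduled task; the task path below is the one commonly seen on WMF 4.0, so verify it on your build:
# Trigger the DSC consistency check now instead of waiting for RefreshFrequencyMins
# (task path as commonly seen on WMF 4.0 - verify with Get-ScheduledTask if it differs)
Start-ScheduledTask -TaskPath '\Microsoft\Windows\Desired State Configuration\' -TaskName 'Consistency'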
Note how the node did not need to pull specific modules in this case, but the pull service endpoint can provide modules when a node needs them to apply a specific configuration.
Finally, we can confirm that the folder was created, with content copied by DSC. And if we were to delete this folder, it will be copied again by DSC.
Note : You can also leverage the xDscDiagnostics module for some of these, as needed.
We’ve now checked that everything is working on a single node. In the next post in this series, we will look at how the conformance endpoint can be used to look at the status of configuration downloads/applications across nodes.
Blog post series agenda
- Series agenda, Understanding the DSC conformance endpoint
- How to deploy a pull service endpoint and automate the configuration of the DSC nodes (this post)
- How to leverage the conformance endpoint deployed along with part of the pull service endpoint, to report on accurate configuration deployment and application: Are my DSC nodes downloading and applying the configurations without encountering any errors?
- Some options to determine if the nodes are conformant with the given configuration: Are my DSC nodes conformant with the configuration they are supposed to enforce?
Note : This last blog post will be published at a later date
Desired State Configuration (DSC) Nodes Deployment and Conformance Reporting Series (Part 3): Working with the conformance endpoint
This blog post covers how to deploy/configure, and work with the conformance endpoint. It includes details about the type of information returned, as well as sample ways to generate reports with meaningful data.
Configuring Conformance Endpoint
Similar to the pull service endpoint, we can use the xDscWebService resource from the DSC resource kit to configure a conformance endpoint. The configuration used to deploy both endpoints is available in the first post of this series. Note the requirements (xPSDesiredStateConfiguration module namely, also included in the DSC Resource Kit – This is also explained in the first blog post)
Note As of today, the conformance endpoint needs to be deployed on the same system as the pull service endpoint. This is because the status of each target node gets stored in an Access database (devices.mdb) on the system that is configured as the pull service endpoint. The conformance endpoint queries the same database for the target node status.
Exploring Conformance Endpoint
Once the configuration is complete, we can access the conformance endpoint at http://<computername>:<portnumber>/<EndpointName>/PSDSCComplianceServer.svc. So, from our example, this URL will be http://localhost:9090/DscConformance/PSDSCComplianceServer.svc. If everything worked as expected, we should see output from the endpoint similar to what is shown here:
The Status method provides the configuration run status for each target node that is configured to receive configuration from the pull service endpoint. If we access the Status method, we will see browser output similar to what is shown below:
Make a note of the highlighted section (bottom-right corner) in the previous screenshot. This shows how many target systems are available in the pull service inventory. If a pull client hasn't received any configuration from the pull service endpoint, it won't get listed in the Status method output. The output that we see in the browser isn't very helpful by itself. However, we can use this browser view to understand more about the Status method and what type of output we can expect from it. This is done using the meta-data operation.
To see the meta-data from the Status method, append $metadata to the conformance endpoint URL.
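The same metadata can also be retrieved from PowerShell rather than a browser; note the single quotes, which stop $metadata from being expanded as a variable:
# Retrieve the Status method metadata ($metadata must not be expanded, hence the single quotes)
Invoke-RestMethod -Uri 'http://localhost:9090/DscConformance/PSDSCComplianceServer.svc/$metadata' -UseDefaultCredentials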
The XML output seen in the above screenshot gives an overview of all properties that will be a part of Status method output. Here is a quick summary of these properties and what they mean.
Property Name | Description |
TargetName | IP Address of the pull client |
ConfigurationId | GUID configured as the ConfigurationID in the meta-configuration of the pull client |
ServerChecksum | Value from the configuration MOF checksum file on the pull service endpoint |
TargetCheckSum | Value of the checksum from the target system |
NodeCompliant | Boolean value indicating if the last configuration run was successful or not |
LastComplianceTime | Last time the pull client successfully received the configuration from pull service |
LastHeartbeatTime | Last time the pull client connected to the pull service |
Dirty | Boolean value indicating if the target node status is recorded in the database or not |
StatusCode | Describes the Node status. Refer to PowerShell team blog for a complete list of status codes. |
We can see the values of these properties by using the Invoke-RestMethod cmdlet to query the OData endpoint.
$response = Invoke-RestMethod -Uri 'http://localhost:9090/DscConformance/PSDSCComplianceServer.svc/Status' -UseDefaultCredentials -Method Get -Headers @{Accept="application/json"}
$response.value
In the above example, I have specified the -UseDefaultCredentials switch parameter. This is required because the conformance endpoint uses Windows Authentication. The Value property of the web service response includes the output from the Status method for each target node.
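For instance, a quick way to narrow that down to only the problem nodes is to filter on the NodeCompliant property. This is a hedged one-liner that assumes $response from the call above is still in the session:

# Show only the nodes whose last configuration run did not succeed.
# The comparison works whether NodeCompliant is returned as a Boolean or as the string 'True'/'False'.
$response.value | Where-Object { $_.NodeCompliant -ne $true }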
Understanding Conformance Status and Reporting
As you can see in this output, everything seems to be working fine in my deployment and all target systems are in perfect sync with the pull service endpoint. Once again, as explained in the introduction to this blog post series, the NodeCompliant property in the output does not indicate whether the target system is in the desired state. It only indicates whether the last configuration run was successful. So, let us test that by placing a buggy configuration MOF for one of the target nodes. For demonstration purposes, I will create a configuration script that includes a custom DSC resource that does not exist on the target system. When this configuration is received on the target node, it should fail because of the missing resource module.
Configuration DummyConfig {
    Import-DscResource -ModuleName DSCDeepDive -Name HostsFile
    Node '883654d0-ee7b-4c87-adcd-1e10ea6e7a61' {
        HostsFile Demo {
            IPAddress = "10.10.10.10"
            HostName  = "Test10"
            Ensure    = "Present"
        }
    }
}

DummyConfig -OutputPath "C:\Program Files\WindowsPowerShell\DscService\Configuration"
New-DscCheckSum -ConfigurationPath "C:\Program Files\WindowsPowerShell\DscService\Configuration\883654d0-ee7b-4c87-adcd-1e10ea6e7a61.mof" -OutPath "C:\Program Files\WindowsPowerShell\DscService\Configuration"
Once I changed the configuration on the pull service endpoint, I ran the scheduled task manually to pull the configuration, and it failed because the HostsFile resource module is not available on the pull service endpoint for the target system to download. So, at this moment, if we look at the Status method again, we should see that NodeCompliant is set to False. To get the right StatusCode value, we need to run the scheduled task again or wait for the pull client to connect to the pull service again.
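If you would rather not wait for the schedule, the built-in DSC consistency task on the pull client can be started manually. The task path and name below are the ones shipped with WMF 4.0 on Windows Server 2012 R2; adjust them if your build differs:

# Trigger the LCM consistency check on the pull client without waiting for its schedule
Start-ScheduledTask -TaskPath '\Microsoft\Windows\Desired State Configuration\' -TaskName 'Consistency'

# Optionally, confirm when it last ran and whether it succeeded
Get-ScheduledTaskInfo -TaskPath '\Microsoft\Windows\Desired State Configuration\' -TaskName 'Consistency' |
    Select-Object LastRunTime, LastTaskResult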
Looking at the Status method output again, the NodeCompliant state is set to False and the StatusCode is set to 10. So, what is StatusCode 10? From the PowerShell team blog, I understand that it means there was a failure in getting the resource module. Wouldn't it be good if I could see the text description of the code instead of an integer value? Also, the IP address as the TargetName won't make much sense to me. So, when I generate a report, I'd like to see the computer name of the target system instead of the IP address. How can we achieve that?
Yes, with a little bit of PowerShell!
$statusCode = @{
    0  = 'Configuration was applied successfully'
    1  = 'Download Manager initialization failure'
    2  = 'Get configuration command failure'
    3  = 'Unexpected get configuration response from pull service endpoint'
    4  = 'Configuration checksum file read failure'
    5  = 'Configuration checksum validation failure'
    6  = 'Invalid configuration file'
    7  = 'Available modules check failure'
    8  = 'Invalid configuration Id In meta-configuration'
    9  = 'Invalid DownloadManager CustomData in meta-configuration'
    10 = 'Get module command failure'
    11 = 'Get Module Invalid Output'
    12 = 'Module checksum file not found'
    13 = 'Invalid module file'
    14 = 'Module checksum validation failure'
    15 = 'Module extraction failed'
    16 = 'Module validation failed'
    17 = 'Downloaded module is invalid'
    18 = 'Configuration file not found'
    19 = 'Multiple configuration files found'
    20 = 'Configuration checksum file not found'
    21 = 'Module not found'
    22 = 'Invalid module version format'
    23 = 'Invalid configuration Id format'
    24 = 'Get Action command failed'
    25 = 'Invalid checksum algorithm'
    26 = 'Get Lcm Update command failed'
    27 = 'Unexpected Get Lcm Update response from pull service endpoint'
    28 = 'Invalid Refresh Mode in meta-configuration'
    29 = 'Invalid Debug Mode in meta-configuration'
}

$response = Invoke-RestMethod -Uri 'http://localhost:9090/DscConformance/PSDSCComplianceServer.svc/Status' -UseDefaultCredentials -Method Get -Headers @{Accept="application/json"}

$response.value |
    Select-Object @{Name='TargetName';Expression={[System.Net.Dns]::GetHostByAddress($_.TargetName).HostName}},
                  ConfigurationId,
                  NodeCompliant,
                  @{Name='Status';Expression={$statusCode[$_.StatusCode]}} |
    Format-List
Note: These status codes can also be found in this blog post.
Note: In case you are wondering, the computer names used in this specific demo are different from the ones in the previous blog post, because this sample was created in a different environment. It does, however, work with any pull and conformance endpoint, as long as you use the appropriate URI.
So, what we see now is more meaningful. In this demonstration, I have only four target systems. But when you have more target systems, it is useful to have some sort of visual indication for systems that have issues applying or working with configurations. For starters, we can build a simple HTML report with PowerShell.
Function Get-DscConformanceReport {
    param (
        $Uri = 'http://localhost:9090/DscConformance/PSDSCComplianceServer.svc/Status'
    )

    # Query the conformance endpoint and resolve IP addresses to host names;
    # relies on the $statusCode hash table defined in the previous script
    $response = Invoke-RestMethod -Uri $Uri -UseDefaultCredentials -Method Get -Headers @{Accept="application/json"}
    $NodeStatus = $response.value |
        Select-Object @{Name='TargetName';Expression={[System.Net.Dns]::GetHostByAddress($_.TargetName).HostName}},
                      ConfigurationId,
                      NodeCompliant,
                      @{Name='Status';Expression={$statusCode[$_.StatusCode]}}

    # Construct the HTML report, highlighting non-compliant nodes in red
    $HtmlBody = "<html><body style='font-family: Calibri;'>"
    $TableContent = "<table border='1'><tr style='color: #fff; background: black;'><th>TargetName</th><th>ConfigurationId</th><th>NodeCompliant</th><th>Status</th></tr>"
    foreach ($Node in $NodeStatus) {
        if (-not [bool]$Node.NodeCompliant) {
            $TableContent += "<tr style='color: #fff; background: red;'>"
        } else {
            $TableContent += "<tr>"
        }
        $TableContent += "<td>$($Node.TargetName)</td><td>$($Node.ConfigurationId)</td><td>$($Node.NodeCompliant)</td><td>$($Node.Status)</td></tr>"
    }
    $TableContent += "</table>"
    $HtmlBody += $TableContent + "</body></html>"

    # Write the HTML file and open it in Internet Explorer
    $HtmlBody | Out-File "$env:Temp\DscReport.HTML" -Force
    Start-Process -FilePath iexplore.exe -ArgumentList "$env:Temp\DscReport.HTML"
}
What the function generates is not a fancy HTML report. It just highlights all rows with NodeCompliant set to False. I am pretty sure that people with good JavaScript skills can beautify this report and include many other details.
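To try it, load the function into a session where the $statusCode hash table from the earlier script is already defined and call it. The -Uri parameter lets you point the function at a remote conformance endpoint; the server name below is only a placeholder.

# Query the local conformance endpoint and open the report in Internet Explorer
Get-DscConformanceReport

# Or target a remote pull/conformance server (hypothetical server name)
Get-DscConformanceReport -Uri 'http://pullserver:9090/DscConformance/PSDSCComplianceServer.svc/Status'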
In the first release of DSC, the conformance endpoint gives only limited information, as we have seen so far. For starters, the current functionality is good for understanding whether the configuration run itself completed, how many target systems are available in the deployment, and so on. Blog post #4 in this series should be published in a few weeks; it will look at new capabilities coming soon in Windows Management Framework (WMF) and DSC to surface the actual configuration health and drift, as well as sample ways to work with them. In the meantime, you can work around the current limitations by using the CIM methods offered by the LCM and build custom reports based on those results. And, if you are familiar with writing ASP.NET web applications and services, you can deploy your own endpoints to do more than what the conformance endpoint provides.
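As a rough illustration of that workaround (the node names are hypothetical, and the sketch assumes WinRM connectivity to the targets), the WMF 4.0 cmdlets can be used to ask each LCM whether the machine is currently in its desired state:

# Ask each target node's LCM whether it is currently in the desired state.
# Node names are placeholders; WinRM connectivity to each node is assumed.
$nodes = 'SRV01', 'SRV02'
$sessions = New-CimSession -ComputerName $nodes

foreach ($session in $sessions) {
    [pscustomobject]@{
        Node           = $session.ComputerName
        InDesiredState = Test-DscConfiguration -CimSession $session
    }
}

$sessions | Remove-CimSession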
Blog post series agenda
- Series agenda, Understanding the DSC conformance endpoint
- How to deploy a pull service endpoint and automate the configuration of the DSC nodes
- How to leverage the conformance endpoint, deployed along with the pull service endpoint, to report accurately on configuration deployment and application (this post): Are my DSC nodes downloading and applying the configurations without encountering any errors?
- Some options to determine if the nodes are conformant with the given configuration: Are my DSC nodes conformant with the configuration they are supposed to enforce?
Note: This last blog post will be published at a later date.
Free Partner Practice Enablement Training in a city near you!
Partners,
Check out the following business development and technical training options for Azure, taking place in a few select U.S. cities. The first description below is for a one-day Azure business development training; the next is for the technical training. If you can make it to one of the U.S. cities below (or, if you click on "Register Here", you'll see several worldwide cities as well), sign up today, as space is limited!
Azure Business Development Training U.S. locations | Azure Technical Training U.S. locations |
New York, September 3 | New York, September 3 |
Silicon Valley, September 10 | |
Dallas, October 28 | |
Get Ready for Business as a Service Training!
To successfully overcome the challenges of the Cloud, we have designed a one-day Get Ready for Business as a Service training to help you understand and develop the right business model for your market and to support you in entering and growing the business in the emerging Cloud marketplace. This training is designed for a 1-to-many delivery, based upon best practices from real-life experience across more than 120 cloud business enablement engagements with IT providers. It is a step-by-step practical program, delivered with close consideration of the market(s) SIs are operating in.
Agenda: Setting the stage: Brain Power session #1
Cost: No fee is charged for attending the training. Light catering will be provided. Students are responsible for securing their own accommodations and covering all expenses related to their own travel arrangements.
Questions: If you have any questions, please send an email to ppebootcamp@microsoft.com.
NOTE: This invitation does not guarantee a seat. Seats are confirmed on a first come, first served basis.
Where and When: For a complete list of locations and dates, please visit our website.
Audience:
Format and Level:
Michael Kophs
Partner Technology Strategist
Microsoft