
Windows XP: April 8th - Almost Here!


For the past couple of years, Microsoft has been advising customers of the planned end of extended support date for Windows XP. We’ve even been using a countdown clock on the Windows XP page (http://www.microsoft.com/en-us/windows/enterprise/end-of-support.aspx). In fact, you’ve probably also been made aware of, or seen first-hand, the end-of-support notifications that are now popping up on Windows XP machines. You may have also recently read this:

http://blogs.windows.com/windows/b/windowsexperience/archive/2014/03/03/new-windows-xp-data-transfer-tool-and-end-of-support-notifications.aspx

The update KB 2934207 (information here: http://support.microsoft.com/kb/2934207) also adds a notification prompt (which some in the press have affectionately referred to as the “Death Notice”).

If you are not seeing this update, it is likely because your Windows XP machine is being managed by WSUS, Configuration Manager, or through the cloud with Windows Intune. Only Windows XP machines (Windows XP Home and Professional editions) that receive updates via Windows/Microsoft Update will see these notifications.

If for some reason you are receiving these notices and you would like to disable them, you can do so in the registry under one of the following keys:

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion

or

HKCU\Software\Microsoft\Windows\CurrentVersion

Set the DisableEOSNotification (DWORD) value to 1 to disable the notifications; a value of 0 enables them.
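If you prefer to script the change, here is a minimal PowerShell sketch of the per-machine tweak; it simply sets the registry value described above, and should be run from an elevated prompt (use the HKCU path instead for a per-user setting):

# Suppress the end-of-support notification for all users on this machine
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion' `
    -Name 'DisableEOSNotification' -PropertyType DWord -Value 1 -Force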

Regardless of this change, the fact remains that the end of all support, except for custom support agreements, is still April 8, 2014. If you are still running Windows XP in *ANY* form (physical desktops, VDI, MED-V, etc.), this affects you. Without a CSA, you will receive no further security updates, and you run the risk of being vulnerable after that date. Also bear in mind that if you are virtualizing Internet Explorer 6, 7, or 8 with any non-Microsoft application virtualization solution, you will be indirectly affected as well.

Consumers and small-to-midsize customers looking to update can receive special offers and discounts via our Get2Modern page here: http://www.microsoft.com/en-us/windows/business/retiring-xp.aspx

A Custom Support Agreement (CSA) requires a Premier Services Agreement with Microsoft. If you are currently an enterprise customer with a Premier contract, we have been making some changes to the Windows XP Custom Support Standard Program, which provides critical security updates, technical assistance and continued support for the product after April 8th. Please contact your Technical Account Manager (TAM) for more information.

Please note: this applies to Windows XP and NOT Windows XP Embedded. Windows XP Embedded is a different operating system designed for specialized OEM embedded devices, and it has always run on a different support lifecycle, ending in 2016. That lifecycle has been in place for a while, in spite of what you may have read in articles out there on the Internet.


Don’t fly blind, measure your apps from the inside


If you have an app in the Windows Store and the Windows Phone Store, you see customer ratings and reviews as well as a store dashboard that provides information like adoption rates, download history, in-app purchases and sales.

But none of this tells you about the users’ specific activity within the app itself. The Windows Apps Team has the 411 today on measuring your app from the inside using telemetry and analytics.

Telemetry, or tele-metering, is automated remote measurement and data collection used for a wide variety of purposes such as tracking spacecraft, tracking wildlife and medical monitoring. Putting an app on the global market is like launching a spacecraft: Without telemetry, you’re flying blind!

So, dial your apps in. The Windows blog has all the details.

You might also be interested in:

· Tame taxes with the help of Bing Finance and other Windows and Windows Phone apps
· App of the Week: Get revved up for the Formula One season with "ESPNF1" for Windows Phone
· Stop Dr. Doom and other villains in “Avengers Alliance” – now on Windows Phone

Aimee Riordan
Microsoft News Center Staff

Crater Lake … in New Zealand?



A quick glance at today’s Bing home page and someone from the Northwest might think they’re looking at Oregon’s Crater Lake. But this is a volcanic caldera down under: New Zealand’s Mt. Ruapehu.

It’s located in the country’s oldest national park, Tongariro, in the central part of the north island. The active stratovolcano last erupted in September 2007.

Mt. Ruapehu is 9,177 feet tall, the highest point on the north island. Between major eruptions, a warm acidic lake forms in its crater, fed by melting snow.

If you need a break today to dream about faraway, wind-swept places, check it out.

You might also be interested in

· Bing brings Machu Picchu to you
· Find out how far it’ll take to go from A to B on Bing at a glance
· Imagine a school without walls, textbooks or teachers

Aimee Riordan
Microsoft News Center Staff

First US Music Tech Fest finds musicians and computer scientists in harmony



At the first U.S.-based Music Tech Fest, people who share interests in music and technology are getting together at the Microsoft New England Research & Development Center in Cambridge, Mass. From Friday to March 23, attendees will get an earful of demos, performances, presentations and new collaborations.

Nancy Baym, a principal researcher for Microsoft Research, will be on a panel discussion during the event.

“I will be talking about my research on musicians’ perspectives on audience interaction and relationship — and how social media have and have not changed that,” she explains. “I’ll cover how social media’s potential to foster continuous ongoing relational engagement raises interpersonal opportunities and challenges and how much work it is to balance them.”

Music Tech Fest puts all the elements of the music-technology ecosystem in one place for a single event: from performers, hackers, developers and researchers to media, startups and app creators. This amalgam is billed as a “festival of ideas,” rather than as a conference.

Head over to Inside Microsoft Research to find out more about Music Tech Fest.


Athima Chansanchai
Microsoft News Center Staff

App-V 5: On Roaming Exclusions


When you use App-V with roaming profiles, or with a service or product that roams the integration settings of virtual applications, App-V has historically assumed that once a package’s extension points are laid down (integrated), roaming user profiles will carry them alongside the user’s catalog, keeping the two in sync.  The App-V 5 Client Integration component relies on the client’s copy of the catalog to determine which extension points get generated (or re-generated).  This is how App-V 5 Integration quickly calculates which extension points and integration links (junction points) need to be created during publishing. In previous versions, when everything was isolated into individual FSD and PKG files, it was fairly easy to integrate App-V data into your roaming user environments.

As you may note, I am purposely using the term “Roaming User Environment” as a generic term that refers not only to the Roaming User Profiles native to Windows, but also to environments that may be roamed using Citrix UPM (User Profile Manager), AppSense UEM, UE-V, RES, Immidio, etc. Many of these environment managers work more granularly than the standard Windows configuration. The App-V 5 client configuration allows administrators to align their roaming user environment configuration with their App-V client configuration.  Specifically, administrators identify which registry key locations under HKCU and which directory locations under %USERPROFILE% do not roam.

The App-V Client Integration component uses its Client Configuration to set and get roaming exclusions.  The exclusion lists are captured in the App-V Client Configuration using the following keys:

HKLM\Software\Microsoft\AppV\Client\Integration\RoamingFileExclusions

HKLM\Software\Microsoft\AppV\Client\Integration\RoamingRegistryExclusions

Each roaming exclusion list is a REG_SZ value containing a semicolon-separated list of paths to excluded data.  File exclusion paths are relative to %USERPROFILE% and contain no leading or trailing slash.  Registry exclusion paths are paths to keys relative to HKEY_CURRENT_USER and contain no leading or trailing slashes. The App-V client setup establishes a default roaming configuration for the client machine as a best effort during client installation, based on well-known Windows settings. For example, Windows never roams registry data under SOFTWARE\Classes, and may erase it on logoff, so the exclusion list set during App-V Client setup will always include SOFTWARE\Classes.
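To see what the current exclusion lists look like on a given client, here is a minimal sketch; it reads the registry values named above, and (assuming the App-V 5 client PowerShell module, AppvClient, is present on the machine) the same settings through the client cmdlets:

# Read the current exclusion lists straight from the client configuration keys
Get-ItemProperty -Path 'HKLM:\Software\Microsoft\AppV\Client\Integration' |
    Select-Object RoamingFileExclusions, RoamingRegistryExclusions

# Or query them through the App-V client module
Import-Module AppvClient
Get-AppvClientConfiguration | Where-Object { $_.Name -like 'Roaming*Exclusions' }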

Configuration of Roaming Exclusions

Of course, one should recognize that this may not be enough. Administrators who wish to change the list of roaming exclusions from the default configuration populated during client installation can do so. Roaming Exclusions can be configured by way of:

Manual Registry Configuration: Per the information in the preceding paragraphs, you can make adjustments by modifying HKLM\Software\Microsoft\AppV\Client\Integration\RoamingFileExclusions

And/or HKLM\Software\Microsoft\AppV\Client\Integration\RoamingRegistryExclusions

Please bear in mind that the changes you make will take effect only for new users logging onto that App-V 5 client.

PowerShell: You can use the following PowerShell Cmdlets to set roaming exclusions:

Set-AppvClientConfiguration –RoamingFileExclusions

Set-AppvClientConfiguration –RoamingRegistryExclusions
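A minimal usage sketch follows, assuming the same semicolon-separated string format described for the registry values (these example lists mirror the installer-switch examples later in this post; adjust them to your environment):

Set-AppvClientConfiguration -RoamingFileExclusions 'desktop;my pictures'
Set-AppvClientConfiguration -RoamingRegistryExclusions 'software\classes;software\clients'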

 

Please bear in mind that, like every other setting, the cmdlet will check whether these settings are applied and managed via GPO by checking HKLM\Software\Policies\Microsoft\Application Virtualization.  If any of the provided configuration is in the Group Policy registry node, the cmdlet will fail.  If Group Policy does not own any of the supplied configuration, the settings are written to the HKLM\Software\Microsoft\AppV client configuration keys. Please also bear in mind that the changes you make will take effect only for new users logging onto that App-V 5 client.

Group Policy Object (GPO): The MDOP ADMX templates include settings for both Roaming File and Roaming Registry Exclusions. This will enable you to pre-deploy these configurations via GPO. The ADMX template can be downloaded here: http://www.microsoft.com/en-us/download/details.aspx?id=41183

Deployment Using Installer Switch: Per http://technet.microsoft.com/en-US/library/jj687745.aspx - you can supply this configuration upon deployment of the App-V Client using the following switches:

/ROAMINGFILEEXCLUSIONS

Usage:  /ROAMINGFILEEXCLUSIONS='desktop;my pictures'

/ROAMINGREGISTRYEXCLUSIONS

Usage: /ROAMINGREGISTRYEXCLUSIONS=software\\classes;software\\clients

Administrators managing environments that don’t support roaming user profiles can disable all roaming exclusions by emptying the list using Group Policy.  This yields the best possible performance for integrated extension points because extension points are never re-integrated unless explicitly requested through manifest policy, dynamic configuration, or package updates.

The App-V 5 integration system (which creates and manages shortcuts, FTAs, Integration Path junction points, etc.) uses the roaming exclusions to force integration of extension points that otherwise appear to be up to date, by maintaining this list of exclusions and comparing against it at logon. At that time, for each package the user has published, each of that package’s integration and extension points is checked to see whether it was integrated to a location included in the roaming exclusion lists.  If so, that extension point will be re-integrated.  Otherwise, no re-integration is necessary.

OneDrive for Business Redirection to Office 365 Overview

At the 2014 SharePoint Conference we announced the new OneDrive for Business SKU, in addition to changes in Service Pack 1 functionality that enable IT administrators to selectively redirect their users to OneDrive for Business in Office 365 from SharePoint Server 2013. Planning: the initial prerequisite step to implementing OneDrive for Business redirection to Office 365 in Service Pack 1 is choosing the most effective identity management/federation options to suit your business needs. At minimum...(read more)

Top 5: Contributors Who Published the Most Articles in the Last Month


Hello,

We are back with another weekend of statistics. In this week’s statistics I am proud to publish the contributors who shared the most articles over the last month. As in every other area, it is a particular point of pride that the TAT team ranks near the top. In the table below we see the top 5 list, and it does not escape notice that two members of the TAT team are on it. For that reason we thank Asil Mutlu and wish him continued success. The rapid rise of our friend Ersin Can, who joined us only a short time ago, also deserves a separate thank-you. I hope that as a team we will go on to even better things. Lately we have started to hear the names Benoit Jester, Yelenah and Alan Nascimento Carlos very often, and on all platforms; if you look back at the statistics from previous weeks, they always find a place on the list. For that reason we wish them continued success and thank them.

If you are wondering what these valued Wiki family members who made the list are up to, you can take a look at their profiles below.

  1. Yelenah
  2. Benoit Jester
  3. Alan Nascimento Carlos
  4. Asil Mutlu
  5. Ersin Can

ADFS and Exchange Server 2013


To talk about the combination of ADFS and Exchange Server 2013, we first need to clarify the role of ADFS. This being an Exchange blog, I will assume that Exchange is the familiar part. :)

So what exactly is ADFS?

I vividly remember the first time I saw and heard about ADFS, back in 2003. It was at an internal, confidential conference where Bill Gates gave an interesting talk. The talk touched on many future services and products. Among other things, he mentioned an example in which users of company B could use the SharePoint Portal services running at company A in such a way that: there is no traditional trust relationship between the two companies, only HTTPS communication is possible between them, and company B’s user reaches the service via SSO (single sign-on). That last point is the really important one. Company B’s user arrives at the office in the morning, turns on the machine, logs into company B’s Active Directory, then opens the browser and browses company A’s SharePoint portal without having to authenticate again. Nor does the user have any account in company A’s directory; our user exists only in company B’s directory.

That talk and that example grabbed my attention and fired my imagination so much that I could not shake it for a long time. I had no idea exactly how it worked or what miracle made it possible. But for the rest of that week, whenever I got back to the hotel, I kept wondering how it could possibly work. I was sure I had to understand it and work with it, because it was so unique and Bill himself was talking about it. Later the Windows Server 2003 R2 beta arrived with the ADFS component in it, and I jumped on it immediately. I wanted to validate that what Bill had described actually worked. And indeed it does. More than 10 years have passed since then, and we can clearly see that ADFS-based authentication and authorization is an important building block today. Not because there are so many company A to company B relationships forcing the issue; there are not. Unfortunately, customers still stumble over self-imposed shackles whenever they need to provide cross-org access to services. IT looks at it through an IT lens and takes the simplest approach: create a new user in AD. IT security objects, so they build a new forest, which they then trust back and forth. Ugh. That is bad from IT’s perspective and bad for the end users too. Then, from time to time, they start the whole thing over. So no, the world did not start using ADFS to simplify communication between companies. Instead, it became the foundation of cloud services. Just substitute companies A and B: company A is Microsoft, providing Exchange, SharePoint and other services to millions of users, and company B is your company. How will your company’s users reach the services we provide, given that we are not the ones doing the identity management (passwords, password expiration, and so on)?

The answer to that question is simple: this is exactly what ADFS is for. There is no Windows NT-style trust to build, there is no RPC traffic, only HTTPS, and there is SSO. All is well.

With authentication there is always a dilemma. If we have Alice and Bob, and Bob wants to know whether Alice really is who she claims to be, we need a third party whom both Alice and Bob trust. The presence of a trusted third party in the authentication process makes it easier to establish that Alice really is Alice: the third party performs the authentication and, at the end of the process, hands Alice "something" that Alice cannot read but Bob can. Alice passes this "something" to Bob, who is able to read it.

In the case of Active Directory, breaking this down to the participants:

  • One participant is a user
  • The other participant is a service, say a web server
  • The third party, trusted by both the service and the user, is the domain controller

When the user wants to reach the web server, it is not the web server that authenticates them. The domain controller authenticates the user and then issues a Kerberos ticket (this is the "something"), with which the user can do nothing except pass it on to the web server. The web server can read this Kerberos ticket and determine from it whether the user may access the service or not.

How does the web server know whether the user may access the service? From the fact that the Kerberos ticket contains the user’s SIDs and the SIDs obtained through group membership. The web server compares these SIDs with the SIDs on the access control list of the resource being accessed. From this it should be fairly clear how it works. I have simplified the process here; I am not going into access token generation now.

The point with Kerberos, then, is that the SIDs are in the Kerberos ticket, and from them the resource server can decide whether the user has the necessary rights or not. The process in which the resource server decides whether the user is entitled to access is called the authorization (AuthZ) process.

With ADFS-based authentication, the third party is the ADFS server. Everyone trusts the ADFS server. With two companies, however, we are talking about not one but at least two ADFS systems, since company A and company B each have their own ADFS. Company A trusts only its own ADFS server, and company B trusts only its own ADFS server. The two ADFS systems, however, can be made mutually trusting. That step ensures that an ADFS token issued by company A’s ADFS server is accepted by company B’s ADFS server.

Before we walk through the full process, let’s unpack the ADFS token, "the something", a little. You may remember that a Kerberos ticket or token carries SIDs (simplifying the picture). The ADFS token, however, does not primarily carry SIDs; it carries so-called claims. A claim can be any attribute of the user: the phone number, e-mail address, name, job title, manager and so on can each go into a claim. These claims are then carried in the ADFS token. Of course, the user’s SID or group membership can also be a claim, so the information found in a Kerberos ticket can also be put into an ADFS token. Using claims brings plenty of advantages, for example:

  • You don’t need a group for everything
  • Simpler access rules can be expressed, e.g.: if the job title is X, grant access; if the manager is Y and the job title is X, grant access; and so on
  • SIDs are meaningful only within a single AD forest, whereas claim information can be interpreted across different systems

Imagine the following architecture:

If company B’s client wants to reach company A’s web server and ADFS is in place, the process is as follows:

  • Company B’s user opens the web application in the browser
  • Company A’s web server knows that authentication is required for access, that ADFS authentication is required, and that company B has an ADFS server and what its name is. It therefore replies to company B’s user with an HTTP redirect, sending them to company B’s ADFS server.
  • Company B’s ADFS server authenticates the user with Active Directory-based authentication. It can do this easily, since the ADFS server is a domain member. The authentication can be Kerberos, ADFS forms-based, or Basic authentication; the point is that the user is authenticated at this step.
  • When company A’s web server redirects the client to company B’s ADFS server, the redirect also carries which application the user was trying to reach. So when the client arrives at company B’s ADFS server, it also says which application it wanted to open. This matters because company B’s ADFS server will not issue an ADFS token for just any request: it issues tokens only for access to applications it knows about. It also matters because each application may need different claims. The ADFS server therefore has to know which application it is issuing the token for, because the token should contain those claims, and only those claims, that the application absolutely needs. At this point ADFS issues the token and sends it back to the client.
  • The client sends the token to... whom, exactly? The web server? No. It sends it to company A’s ADFS server, which is able to read it. It reads it, unpacks it, and on the basis of that token issues a new token, which it sends back to the client. There are two main reasons for this. One is that company A’s resources, such as the web server, trust only their own ADFS server and do not trust the ADFS architecture of any connected partner company; the trust relationship exists at the level of the ADFS systems. The other reason is that this step gives company A an opportunity to manipulate the claims in the ADFS token. Imagine that, in company B’s AD, the information needed for the authorization process is stored in attribute X, so the claim will be named X. At company A, however, the claim name X is already taken. What then? Company A’s ADFS can "rename" claim X to claim Y, which the web server will understand. In short: in this step company A’s ADFS issues the new ADFS token and sends it over to the client.
  • The client sends the new ADFS token to the web server, which trusts it because its own ADFS system issued it. The web server unpacks it and performs the authorization process based on the claims.

The process looks quite complicated, and it seems slow, too. In practice, however, once you understand and think through the above carefully, the flow is fairly self-evident. And in practice all of this is very fast; it does not slow down access in the slightest. A few other things that may stand out and that follow from the above:

  • The resource server, in our case the web server, has to understand the ADFS authentication method. It must know right at the start that it has to redirect the user to ADFS, and at the end of the process it must understand the ADFS token format.
  • The client must understand the HTTP redirect and support the whole process. In practice we distinguish between active and passive clients, and the flow differs for each client type. I will not go into those details now.

In my opinion, one of the best books written on ADFS, which I warmly recommend as a starting point, is available for free here: http://www.microsoft.com/en-us/download/details.aspx?id=28362 I will not attempt a thorough treatment of ADFS in blog form, only in in-person training, so at this point let’s settle for a conceptual understanding; for more detail I recommend reading the book.

Why is this interesting for Exchange?

Until now our Exchange Server product did not support ADFS-based authentication. Yes, ADFS-based authentication could be switched on in various unsupported ways, but those do not work properly, were not tested thoroughly, and are therefore not supported solutions. A good example: what worked with Exchange 2010 SP1 no longer works with SP2 and SP3. With Exchange Server 2013 SP1, however, this has changed. Starting with that version, ADFS-based authentication is usable and supported. So besides traditional NTLM, Kerberos or forms-based authentication, we now have the option of using ADFS-based authentication. The advantages can include:

  • With multiple AD forests we can provide SSO even without a trust
  • During authorization we can use richer information than group membership to decide access

Currently, ADFS authentication is supported only with Exchange Server 2013 SP1.

How do I get started in a test environment?

To start, I recommend our documentation at the following location: http://technet.microsoft.com/en-us/library/dn635116(v=exchg.150).aspx

The most important information and caveats:

  • Three claims must be used: user SID, group SID and UPN
  • ADFS authentication is supported and usable only for OWA and ECP access
  • If we use ADFS authentication, every other authentication method must be turned off; Integrated, Basic or forms-based authentication cannot run alongside ADFS authentication
  • ADFS authentication must be configured per server, per virtual directory, using the Set-OwaVirtualDirectory cmdlet
  • At the organization level you must configure the ADFS server URL (this is how Exchange knows where to redirect) and you must specify the ADFS token-signing certificate (a configuration sketch follows this list).
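As a rough illustration of the last two bullets, here is a minimal Exchange Management Shell sketch. The ADFS URL, audience URIs, certificate thumbprint and server name are placeholders, so treat this as a starting point rather than a ready-made configuration:

# Organization level: tell Exchange where ADFS lives and which token-signing certificate to trust
Set-OrganizationConfig -AdfsIssuer 'https://adfs.contoso.com/adfs/ls/' `
    -AdfsAudienceUris 'https://mail.contoso.com/owa/','https://mail.contoso.com/ecp/' `
    -AdfsSignCertificateThumbprint '<token-signing certificate thumbprint>'

# Per server, per virtual directory: turn on ADFS authentication and turn the other methods off
Set-OwaVirtualDirectory -Identity 'EXCH01\owa (Default Web Site)' -AdfsAuthentication $true `
    -BasicAuthentication $false -FormsAuthentication $false -WindowsAuthentication $false -DigestAuthentication $false

Set-EcpVirtualDirectory -Identity 'EXCH01\ecp (Default Web Site)' -AdfsAuthentication $true `
    -BasicAuthentication $false -FormsAuthentication $false -WindowsAuthentication $false -DigestAuthentication $false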


SCOM - Runbook for persisting Stale Heartbeat Alerts


One of the more common alerts that I see in SCOM environments is the ‘Health Service Heartbeat Failure’.  All too often it is still possible to ping the system, and yet the heartbeat alert remains.  The typical fix is to simply stop the health service on the problem system, delete the contents of the Health Service State folder for the agent, and restart the service.  In most cases this resolves the issue.  Since this is such a common occurrence I decided to create a runbook in System Center Orchestrator to automate the fix.  In order for this to work it is necessary to be running System Center Operations Manager and Orchestrator 2012 and to have the SC 2012 Operations Manager Integration Pack installed in Orchestrator.  Note: this will not resolve any actual problems with failing heartbeats; it will simply clear the cache and force the agent to attempt to update policy.
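If you only want the manual fix scripted outside of Orchestrator, a minimal remote PowerShell sketch of those same steps might look like the following. The agent name and state path are placeholders, and the path shown is the 2012 R2 default; verify them for your own agents before using this:

# Clear a stale health state on one agent: stop the service, delete the cached state, restart
$agent     = 'AGENT01.contoso.com'                                                     # placeholder agent FQDN
$statePath = 'C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State'  # 2012 R2 default location

Invoke-Command -ComputerName $agent -ScriptBlock {
    param($path)
    Stop-Service -Name 'HealthService' -Force                   # internal name behind both display names
    Remove-Item -Path (Join-Path $path '*') -Recurse -Force     # delete the contents of the folder
    Start-Service -Name 'HealthService'
} -ArgumentList $statePath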

Activities:

  • The first activity in the runbook is the ‘Monitor Alert’.  This will monitor for any occurrence of the alert in question.  This activity can be found in the SC 2012 Operations Manager section.  Click the button to the right of the Server > Connection field and select your SCOM Management Server from the list.  For testing purposes I like to trigger on both New Alerts and Updated Alerts.  This allows me to manually set an alert to a new state in order to let the runbook monitor for it instead of waiting for an alert to trigger.  For my runbook I created the following 3 filters:
  • Severity Equals Critical
  • Name Equals Health Service Heartbeat Failure
  • ResolutionState Equals New

  • The second activity is a simple ‘Run Program’ which will ping the system to make sure it is online.  This activity can be found in the System section.  Create a link from the ‘Monitor Alert’ to the ‘Run Program’ activity.  There should be a default include filter created on the link for Monitor Alert returns success.  In the ‘Run Program’ properties select the Command execution mode and enter the name of the computer you are running it from; in this case I used the SCOM Management Server since it should be able to talk to all SCOM Agents.  In the command field we will subscribe to data from the Monitor Alert activity.  Type ping -4, right-click to the right of the new text and select Subscribe > Published Data.  Ensure that the Monitor Alert Activity is selected from the drop-down at the top and select the ‘MonitoringObjectDisplayName’ for the published data before clicking OK.  Click Finish to complete the activity configuration.

  • The third activity is ‘Get Service Status’ which will simply return the state of the HealthService.  The following activities will depend upon the state of this service.  This activity can be found in the Monitoring section.  Create a link from the ‘Run Program’ to the ‘Get Service Status’.  In the properties of the new activity we will subscribe to data from the ‘Monitor Alert’ to get the computer.  Right click in the Computer field and select Subscribe > Published Data.  Select Monitor Alert from the Activity drop down and choose the ‘MonitoringObjectDisplayName’.  The Service name will vary depending on whether you are running Operations Manager 2012 or 2012R2.  You can select the button to the right of the field to browse services on the current system or you can simply type the name of the service.  In a 2012 environment the HealthService will be named ‘System Center Management’ and in a R2 environment it will be ‘Microsoft Monitoring Agent’.  

  • For the fourth activity we will actually create two different versions.  Depending on whether the previous activity showed that the HealthService was running or stopped, the runbook will choose one or the other.  Here we will create 2 versions of the ‘Start/Stop Service’ activity from the System section.
    • Rename the first ‘Start/Stop Service’ activity to ‘Start HealthService’. Create a link from the ‘Get Service Status’ activity to the new ‘Start HealthService’ Activity.  On the link properties change the Include filter to ‘Service status from Get HealthService Status equals Service stopped’.  In the properties for the new ‘Start HealthService’ Activity click the action button for ‘Start service’.  Right click the Computer field and select Subscribe > Published data and select the Service Computer from the ‘Get HealthService Status’ Activity.  For the Service enter the Service name used in the previous activity (either Microsoft Monitoring Agent or System Center Management).

    • Rename the second ‘Start/Stop Service’ activity to Stop HealthService. Create a link from the ‘Get Service Status’ activity to the new ‘Stop HealthService’ Activity.  On the link properties change the Include filter to ‘Service status from Get HealthService Status equals Service Running’.  In the properties for the new ‘Stop HealthService’ activity click the action button for ‘Stop service’.  Right click the Computer field and select Subscribe > Published data and select the Service Computer from the ‘Get HealthService Status’ Activity.  For the Service enter the Service name used in the previous activity (either Microsoft Monitoring Agent or System Center Management).
      (screenshots skipped since they are nearly the same as those shown above, except for the service state)

 

  • For the fifth activity we will be deleting the stale health state on the problem system.  The ‘Delete Folder’ activity can be found in the File Management section.  Create a link from the ‘Stop HealthService’ Activity to this ‘Delete Folder’.  Open the Details section, type \\ in the Path field, and right-click in the field to the right.  Select Subscribe > Published Data.  Select the ‘Get Service Status’ Activity and choose Service computer.  To the right of this text it will be necessary to type the path to the Health Service State folder.  This will vary depending on the version of Operations Manager you are running.  In 2012 R2 the default path is ‘\c$\Program Files\Microsoft Monitoring Agent\Agent\Health Service State’.  Ensure the ‘Delete all files and sub-folders’ option is selected.

  • For the sixth and final step we will repeat the Start Service step from earlier.  Create a new ‘Start/Stop Service’ activity and rename it to ‘Start HealthService’. Create a link from the ‘Get Service Status’ activity to the new ‘Start HealthService’ Activity.  On the link properties change the Include filter to ‘Service status from Get HealthService Status equals Service stopped’.  In the properties for the new ‘Start HealthService’ activity click the action button for ‘Start service’.  Right click the Computer field and select Subscribe > Published data and select the Service Computer from the Get HealthService Status Activity.  For the Service enter the Service name used in the previous activity (either Microsoft Monitoring Agent or System Center Management).

For testing, I would make sure you have a current Heartbeat Alert and set the resolution state to something other than New, or manually stop the service on an agent system.  Make sure to start the runbook and set the heartbeat alert resolution state to New.  Monitor the runbook, the alert, and the SCOM Agent to ensure the process works as expected.  Good luck, and I hope this helps with your recurring Heartbeat Alerts.

Updating a Modern app in Windows 8



Hello,

My name is Mayank Sharma and I am a support engineer on the Microsoft Platforms division team. In this blog I will explain how you can update modern apps on Windows 8 and Windows 8.1 machines that were sideloaded with the OS (meaning that these apps were not installed from the Windows Store).

In an enterprise environment, administrators will typically sideload the company’s in-house apps into the standard image and then deploy it to the Windows devices. However, with time the need to update these sideloaded apps will arise; this blog walks you through how this can be accomplished.

Before we start, it is suggested that you go through the following article: http://technet.microsoft.com/en-US/windows/jj874388.aspx?ocid=wc-nl-insider. It covers the requirements for sideloading an app and how you can sideload your app on Windows devices.

Assuming that you have now gone through that article, let’s get started…

In our scenario, we have a modern app (called App1) that we will first sideload into the Windows 8.0 machine. The .appx package of the app is placed in a shared folder (\\node2\temp, though you would not want to keep your enterprise apps in a temp folder in the real world!)

Step 1. We will first sideload App1 into the Windows base image by running the following command from a PowerShell prompt.

PS Microsoft.PowerShell.Core\FileSystem::\\node2\temp\App1\App1\AppPackages\App1_1.0.0.0_AnyCPU_Debug_Test> dism /online /add-provisionedappxpackage /packagepath:App1_1.0.0.0_AnyCPU_Debug.appx /skiplicense

Remember, there is a difference between adding a package and ‘provisioning’ a package. Adding a package means that only the user who added the package will be able to use it; provisioning means that the package has been provisioned into the Windows image and will be available to every user who logs on to the Windows machine after the package is provisioned.
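If you prefer native cmdlets over dism.exe, the DISM PowerShell module exposes the same provisioning operation; a minimal sketch using the same share path from this walkthrough:

# Provision the package into the online image so every new user gets it
Import-Module Dism
Add-AppxProvisionedPackage -Online `
    -PackagePath '\\node2\temp\App1\App1\AppPackages\App1_1.0.0.0_AnyCPU_Debug_Test\App1_1.0.0.0_AnyCPU_Debug.appx' `
    -SkipLicense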

Step 2. You can check whether the app was sideloaded successfully by running:

PS Microsoft.PowerShell.Core\FileSystem::\\node2\temp\App1\App1\AppPackages\App1_1.0.0.0_AnyCPU_Debug_Test> Get-AppxProvisionedPackage -online |Out-GridView

 

Here is the screenshot from before the app was installed.

After App1 was installed, you will see something like this. Look closely at the GUID and version of the new app.

 

To put this app to the test, I created two standard users in Active Directory named test and test2. When we log on with test for the first time, we see App1 neatly on the Start menu.

 

 

So everything looks in place; now let’s try to upgrade this app. Before we do that, please note that modern apps run in the user context once a user logs on. So here is the deal: if we want to update this app, we have to update it on a per-user basis. With user test logged in, we run the following on the server.

PS Microsoft.PowerShell.Core\FileSystem::\\10.162.100.12\package\App1_1.0.0.2_AnyCPU_Debug_Test> Add-AppxPackage -Path App1_1.0.0.2_AnyCPU_Debug.appx -DependencyPath Dependencies\Microsoft.WinJS.1.0.appx

 

Now if we list all the installed packages on the server, we will see the updated version of the app, as shown below:

 

 

OK, the test user is running the updated version of the app now. What will happen when a new user, test2, logs in? As mentioned earlier, the app runs in user context once the user logs in, so when we updated the package using Add-AppxPackage it only updated the version of the app for user test, not for test2.

 

To illustrate the point, this is what we see once user test2 logs in to the Windows 8 device and we run Get-AppxPackage: only the original version of the app, as shown below.

 

Now, to automate this process, you may want to put Add-AppxPackage into a script and run it either as a scheduled task or as a logon script.
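A minimal logon-script sketch of that idea follows; the package name, share path and target version are the ones used in this walkthrough, so treat them as placeholders for your own app:

# Update App1 for the current user if they are still on an older version
$packageName   = 'App1'
$targetVersion = [version]'1.0.0.2'
$appxPath      = '\\10.162.100.12\package\App1_1.0.0.2_AnyCPU_Debug_Test\App1_1.0.0.2_AnyCPU_Debug.appx'
$dependency    = '\\10.162.100.12\package\App1_1.0.0.2_AnyCPU_Debug_Test\Dependencies\Microsoft.WinJS.1.0.appx'

$installed = Get-AppxPackage -Name $packageName
if ($installed -and [version]$installed.Version -lt $targetVersion) {
    Add-AppxPackage -Path $appxPath -DependencyPath $dependency
}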

Thank you for reading the blog, I hope you will find it useful.

 

Revisiting SBS 2011 Standard Migrations

Tip of the Day: Hotfixes


Today’s Tip…

Want a quick way to get hotfix information? Use the Windows PowerShell Get-HotFix cmdlet.

Command: Get-HotFix (or the abbreviated ‘hotfix’)


To check whether a specific hotfix is installed on the system, I use the following command.

Hotfix -Id “KB Number”
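A couple of further one-liners along the same lines (the KB number here is only an example):

# Check for one specific update by its KB number
Get-HotFix -Id 'KB2934207'

# List the ten most recently installed updates first
Get-HotFix | Sort-Object -Property InstalledOn -Descending | Select-Object -First 10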


Chat with Gary Shapiro, Award Winning Global Top Innovating Thought Leader, Author and Executive, CEO Consumer Electronics Association


Gary Shapiro is president and CEO of the Consumer Electronics Association (CEA)®, the U.S. trade association that represents more than 2,000 consumer electronics companies and owns and produces the world's largest annual innovation tradeshow, the International CES®.

Shapiro led the industry in its successful transition to HDTV. He co-founded and chaired the HDTV Model Station and served as a leader of the Advanced Television Test Center (ATTC). He is a charter inductee to the Academy of Digital Television Pioneers, and received its highest award as the industry leader most influential in advancing HDTV. He focused on the need for and led the effort to obtain the 2009 cut-off date of analog broadcasting.

As chairman of the Home Recording Rights Coalition (HRRC), Shapiro led the manufacturers' battle to preserve the legality of recording technology and consumer fair use rights, and to oppose legislation such as PIPA and SOPA that would harm a robust Internet.

Shapiro has held many exhibition industry leadership posts, and received the exhibition industry's highest honor, the IAEE Pinnacle Award.

He is a member of the Board of Directors of the Northern Virginia Technology Council and the Economic Club of Washington. He sits on the State Department's Advisory Committee on International Communications and Information Policy. He has served as a member of the Commonwealth of Virginia's Commission on Information Technology and on the Board of Visitors of George Mason University. Shapiro also has been recognized by the U.S. Environmental Protection Agency as a "mastermind" for his initiative in helping to create the Industry Cooperative for Ozone Layer Protection (ICOLP).

Shapiro leads a staff of 150 employees and thousands of industry volunteers and has testified before Congress on technology and business issues more than 20 times. In 2012, and in prior years, Washington Life magazine named him one of the 100 most influential people in Washington. Under Shapiro's leadership, CEA also annually wins many awards as a family friendly employer, one of the best places to work in Virginia and as a "green" tradeshow producer.

Shapiro authored CEA's New York Times bestsellers "Ninja Innovation: The Ten Killer Strategies of the World's Most Successful Businesses" (Harper Collins, 2013) and "The Comeback: How Innovation will Restore the American Dream" (Beaufort, 2011). Through these books and television appearances, and as a regular contributor to the Huffington Post, Daily Caller and other publications, Shapiro has helped direct policymakers and business leaders on the importance of innovation in the U.S. economy.

Prior to joining the association, Shapiro was an associate at the law firm of Squire Sanders. He also has worked on Capitol Hill, as an assistant to a member of Congress. He received his law degree from Georgetown University Law Center and is a Phi Beta Kappa graduate with a double major in economics and psychology from Binghamton University. He is married to Dr. Susan Malinowski, a retina surgeon.

To listen to the interview, click on this MP3 file link

DISCUSSION:

Interview Time Index (MM:SS) and Topic

:00:26:
Gary, can you share highlights and useful lessons learned from your long successful history of leadership, setting standards and changing policy?

:01:26:
As an internationally recognized top leader, what are your top leadership tips?

:03:21:
What are your 3-year goals for the CEA and how will they be implemented?

:07:14:
Earlier you talked about the International CES, and it's a premier event in the world. Can you talk about some of your longer-term goals for that event?

:08:53:
What are your views on global challenges and their solutions?

:11:21:
What are your top tips for innovation and entrepreneurship?

:14:24:
What areas continue to surprise you?

:16:44:
You have already mentioned some innovations in prior questions. Are there any other disruptive innovations that you see coming up?

:19:29:
What are the top growth regions internationally based on your experiences?

:22:24:
What kind of improvements would you like to see in policy in the next two years in your country and internationally?

:24:30:
If you were conducting this interview, what question would you ask, and then what would be your answer?

Microsoft's Leslie Lamport Wins the ACM Turing Award akin to the Nobel Prize of Computing


Extracted from this news release, the ACM (Association for Computing Machinery) named Leslie Lamport, a Principal Researcher at Microsoft Research, as the recipient of the 2013 ACM A.M. Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages. He devised important algorithms and developed formal modeling and verification protocols that improve the quality of real distributed systems. These contributions have resulted in improved correctness, performance, and reliability of computer systems.

The ACM Turing Award, widely considered the “Nobel Prize in Computing,” carries a $250,000 prize.

ACM President Vint Cerf noted that “as an applied mathematician, Leslie Lamport had an extraordinary sense of how to apply mathematical tools to important practical problems. By finding useful ways to write specifications and prove correctness of realistic algorithms, assuring strong foundation for complex computing operations, he helped to move verification from an academic discipline to practical tool.”

Lamport’s practical and widely used algorithms and tools have applications in security, cloud computing, embedded systems and database systems as well as mission-critical computer systems that rely on secure information sharing and interoperability to prevent failure. His notions of safety, where nothing bad happens, and liveness, where something good happens, contribute to the reliability and robustness of software and hardware engineering design. His solutions for Byzantine Fault Tolerance contribute to failure prevention in a system component that behaves erroneously when interacting with other components. His creation of temporal logic language (TLA+) helps to write precise, sound specifications. He also developed LaTeX, a document preparation system that is the de facto standard for technical publishing in computer science and other fields.

The citation honoring Lamport highlights many of the key concepts of distributed and concurrent computing that he originated, including "causality and logical clocks, replicated state machines, and sequential consistency." The citation also notes that, "along with others, he invented the notion of Byzantine failure and algorithms for reaching agreement despite such failures. He contributed to the development and understanding of proof methods for concurrent systems, notably by introducing the notions of safety and liveness as the proper generalizations of partial correctness and termination to the concurrent setting."

Background

Leslie Lamport received the IEEE Emanuel R. Piore Award for his contributions to the theory and practice of concurrent programming and fault-tolerant computing. He was also awarded the Edsger W. Dijkstra Prize in Distributed Computing for his paper “Reaching Agreement in the Presence of Faults.” He won the IEEE John von Neumann Medal and was also elected to the U.S. National Academy of Engineering and the U.S. National Academy of Sciences.

Prior to his current position, his career included extended tenures at SRI International and Digital Equipment Corporation (later Compaq Corporation). The author or co-author of nearly 150 publications on concurrent and distributed computing and their applications, he holds a B.S. degree in mathematics from Massachusetts Institute of Technology as well as M.S. and Ph.D. degrees in mathematics from Brandeis University.

ACM will present the 2013 A.M. Turing Award at its annual Awards Banquet on June 21 in San Francisco, CA.

About ACM

ACM, the Association for Computing Machinery is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

PowerTip: Convert Output to Use Specific Delimiter


Summary: Use Windows PowerShell to convert output to a specific delimiter.

Hey, Scripting Guy! Question: How can I use Windows PowerShell to select the process name and paged system memory, and separate the output with a colon?

Hey, Scripting Guy! Answer: Use the Get-Process cmdlet to retrieve the process information, select the name, add the PagedSystemMemorySize property, pipe the output to the ConvertTo-Csv cmdlet, and specify the delimiter as “:”:

Get-Process | select name, PagedSystemMemorySize | ConvertTo-Csv -Delimiter ":"
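And if you want that delimited output in a file rather than on screen, one possible variation (the output path is just an example):

# Skip the type-information header and write the colon-delimited output to a file
Get-Process | Select-Object Name, PagedSystemMemorySize |
    ConvertTo-Csv -Delimiter ':' -NoTypeInformation |
    Set-Content -Path 'C:\temp\process-memory.txt'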


Configuring WSUS 6.x for Network Load Balancing (NLB)


Some content in this section was written by Marta Barillas, an SDET on the WSUS engineering team.

This blog post applies to Windows Server 2012 and Windows Server 2012 R2.

  • For instructions using WSUS 3.x with NLB, please see this TechNet article: http://technet.microsoft.com/library/dd939896(v=ws.10).aspx
  • NLB is not supported on WSUS 2.x or earlier versions. WSUS 2.x is out of support and if you are still using it, you should upgrade to WSUS 3.2 which is available free of charge from Microsoft.

 

Requirements for NLB in WSUS 6.x

The requirements to run WSUS in a NLB cluster include:

  • All nodes in the NLB cluster should be running the same version of WSUS and the same version of Windows, and should have the same Windows Updates installed. Prior to Windows Server 2012, the same WSUS-specific patches must also be installed across all servers in the cluster.
  • The SQL database should be shared across all WSUS servers in the same NLB cluster (WID is not supported for NLB; the SQL database need not be clustered, though it may be) *
  • The content directory should be shared across all WSUS servers in that NLB cluster (see "Configuring Content Sharing" below) *

* If you are not running WSUS in a NLB configuration, then the WSUS servers must not share a database or content directory.

Prior to Windows Server 2012, WSUS 3.2 requires a special set up command line as described in the Network Load Balancing topic in WSUS 3.x documentation. Please refer to that documentation (in the in-box HTML help/CHM file) if you are using WSUS 3.2.

Additionally, the general requirements for NLB itself apply, above and beyond the WSUS-specific requirements discussed above.

 

Sample test configuration

  • WSUS 6.3 --- Windows Server 2012 R2 (2 units)
  • SQL Server --- SQL Server 2012 SP1 (1 unit)
  • WSUS Client ---- Windows 7 SP1 (1 unit)

 

Step 1. Install WSUS

The steps to install WSUS are the same for NLB and non-NLB scenarios. You can install WSUS using PowerShell or Server Manager.

Note: When you use PowerShell to install WSUS 6.x, you must run post-installation tasks from the command line.

Option 1: Install WSUS for NLB using PowerShell (recommended)

  1. Run this PowerShell command to install WSUS and the RSAT management tools:

Install-WindowsFeature updateservices-services,updateservices-db,updateservices-rsat

Note: updateservices-rsat is optional; it installs the WSUS MMC console and cannot be installed when installing WSUS on a Server Core installation.

 

  2. Once you have installed WSUS from the command line, you need to run postinstall from the command line.

& 'C:\Program Files\Update Services\Tools\WsusUtil.exe' postinstall SQL_INSTANCE_NAME=<Name> CONTENT_DIR=<Path>

Note:

  • SQL_INSTANCE_NAME is the name of the SQL Server, and CONTENT_DIR is the path to the directory where downloaded update files will be stored. CONTENT_DIR should be a UNC path, as mapped network drives are NOT supported. For example, \\server1\share1\contentdir would be valid; Z:\contentdir would NOT be valid.
  • For simplicity in testing, you can use an account that has administrator privileges on the SQL server, and you can also use the default instance. You don't need to specify a named instance.
  • This step should be run in serial (not in parallel) across all WSUSs in the NLB.
  • All WSUS servers in the NLB group must use the same content directory and the same SQL database.

Option 2: Install WSUS for NLB using Server Manager

Alternatively, you can install WSUS using the Server Manager GUI.

  1. Launch Server Manager
  2. Select “Add roles and features”
  3. Click Next until reaching “Server Selection” tab, and select the server name to perform installation.

Note: local server is selected by default.

  4. Click Next to the “Server Roles” tab, and select “Windows Server Update Services”.
    • A dialog will be displayed asking to include the features required for WSUS installation.
    • If the WSUS console should not be installed, uncheck the “Include management tools” option on the dialog box.
    • Click “Add Features” on the dialog box.
  5. Click Next until reaching the “Role Services” tab, and:
    • Unselect the “WID Database” option.
    • Select the “WSUS Services” & “Database” options.
  6. Click Next to the “Content” tab, and type the shared ContentDir path. This should be a UNC path, as mapped network drives are NOT supported.
  7. Click Next to the “DB Instance” tab, type the SQL Server machine name, and click the “Check connection” button.
  8. Click Next to the “Confirmation” tab.
  9. Click “Install” and wait for installation to complete.
  10. Click the “Launch Post-Installation tasks” link displayed after installation is completed.

Note: This step must be run in serial (not in parallel) across all WSUSs in the NLB.

 

Step 2. Configure Content Sharing

WSUS Content Sharing is required when using a Shared Database. Documentation for creating a shared file location can be found at: http://technet.microsoft.com/library/dd939896(v=ws.10).aspx. Relevant portions of that article are included here:

Create a shared file location

You should create a single shared file location that is available to all of the front-end WSUS servers. You can use a standard network file share and provide redundancy by storing updates on a RAID controller, or you can use a Distributed File System (DFS) share. The domain machine account of each front-end WSUS server must have Change permissions on the root folder of the file share. That is, if there is a WSUS server installed locally on the computer that has the DFS share, the Network Service account should have Change permissions on the root folder. In addition, the user account of the administrator who will run WsusUtil.exe movecontent should have Change permissions.

After you install a WSUS update, check the NTFS file system permissions for the WSUSContent folder. The NTFS file system permissions for the WSUSContent folder may be reset to the default values by the installer.

It is not necessary to use a DFS share with an NLB cluster. You can use a standard network share, and you can ensure redundancy by storing updates on a RAID controller.
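As one possible way to stand up such a share, here is a minimal sketch that creates the folder, shares it, and grants Change permissions to the machine accounts of two hypothetical front-end servers (the path, domain and server names are placeholders):

# Create and share the content folder on the file server
New-Item -Path 'D:\WsusContent' -ItemType Directory -Force
New-SmbShare -Name 'WsusContent' -Path 'D:\WsusContent' `
    -ChangeAccess 'CONTOSO\WSUS01$','CONTOSO\WSUS02$'

# Mirror the Change permission at the NTFS level for the same machine accounts
icacls 'D:\WsusContent' /grant 'CONTOSO\WSUS01$:(OI)(CI)M' 'CONTOSO\WSUS02$:(OI)(CI)M'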

For Windows Server 2012 (WSUS 6.2), The Scripting Guy wrote about the command line and GUI steps to be used to install a Front-End WSUS Server: http://blogs.technet.com/b/heyscriptingguy/archive/2013/04/15/installing-wsus-on-windows-server-2012.aspx

 

Step 3. Install/Configure NLB

The actual configuration of NLB is detailed on TechNet here: http://technet.microsoft.com/en-us/library/cc754833(v=WS.10).aspx

In our own NLB test environment, we have the following settings set to ON:

  • Single affinity
  • Unicast
  • “Enable spoofing MAC Address” ON (for the NIC in Hyper-V, if you are using a VM)

 

Step 4. Check that things are working

4.1. Test that the master server can switchover in the event of downtime

Run the following command to ensure that multiple servers are listed:

  • Wsusutil listfrontendservers

Shut down the master server. Then run the command again (on a different WSUS machine) and verify that the master server has been switched.

4.2. Test the WSUS client connection

On the WSUS server, assuming that you are using the default port (8530), run the following command

  • netstat -nao | find "8530"

Verify that clients are able to connect. On a client machine which is configured to use the WSUS NLB cluster, run the following command:

  • wuauclt /resetauthorization /detectnow

Upgrade/Patch Considerations

Because the servers share the same database, patching can be tricky: only WSUS machines running the same version should share a database, as the database schema may change as part of the patching.

If you are running WSUS in an NLB configuration, you must upgrade all WSUS servers together. To do this, disconnect each server from the database and upgrade it; once all servers are disconnected from the database and content directory, you can start re-connecting the WS2012 servers to the database and content directory. For one NLB cluster sharing a single database, you could follow these steps (a scripted sketch of the service stop/start steps follows the list):

  1. Backup the database.
  2. Remove all WSUS machines from the NLB.
  3. Stop the IIS services in all machines: ‘Net stop w3svc’
  4. Stop the wsus services in all machines: ‘Net stop wsusservices’
  5. Perform patching on all WSUS
  6. From one of the WSUS machines, run postinstall. This will update the database, so it does not need to be rerun from the other servers.
  7. Start the wsus services in all machines (if needed): ‘Net start wsusservices’
  8. Start the IIS services in all machines (if needed): ‘Net start w3svc’
  9. Enable WSUS machines on NLB.
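For steps 3, 4, 7 and 8, a minimal PowerShell sketch that loops over the cluster nodes might look like this; the node names are placeholders, and you should verify the WSUS service name on your own servers before using it:

# Stop IIS and the WSUS service on every node before patching, then start them again afterwards
$nodes = 'WSUS01','WSUS02'   # assumed NLB node names

Invoke-Command -ComputerName $nodes -ScriptBlock {
    Stop-Service -Name 'W3SVC'          # IIS
    Stop-Service -Name 'WsusService'    # WSUS service (verify the name in services.msc)
}

# ... patch all nodes, then run WsusUtil.exe postinstall on ONE node only ...

Invoke-Command -ComputerName $nodes -ScriptBlock {
    Start-Service -Name 'WsusService'
    Start-Service -Name 'W3SVC'
}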

Mild-mannered reporter, or Superman?


This week I had a bit of a respite from Office 365 migrations, but it was time to put together a necessary report for the upcoming migrations. Will people think you have super-powers if you can produce a report on your Office 365 environment? Well, maybe not, but at least you can impress someone with a nice report and CSV file on all your Office 365 mailboxes. Project managers love that stuff!

In addition to having the text of the report below, I've also attached it to this post to make downloading it easier. Try this in your lab, of course, but I tried to keep this fairly generic. It shouldn't need much modification since I'm reading your accepted domains right from the tenant.

The formatting of this doesn't come through as well on a blog as I'd like. Loading the file into the PowerShell ISE should fix the indenting.

#mailboxcounts-cloud.ps1 = Gets current O365 mailbox counts. grb 2014-03-20

#We'll need the MSOnline commands, and connect to the service.
Import-Module MSOnline

#Get password value and make it into a securestring - don't ever use a clear-text password in a script! http://powertoe.wordpress.com/2011/06/05/storing-passwords-to-disk-in-powershell-with-machine-based-encryption/
$O365login = 'emailaddress'
$Livecred = New-Object System.Management.Automation.PsCredential $O365login, (Get-Content C:\Scripts\password.txt | ConvertTo-SecureString)
Connect-MsolService -Credential $LiveCred
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session

#Some basics. Get & format the date. Define where we are going to write the log file.
$Date = Get-Date -Format 'yyyyMMdd'
$csvPath = 'C:\Scripts\logs\cloudmxbcounts-' + $Date + '.csv'
$Path = 'C:\Scripts\logs\mailbox-counts-' + $Date + '.log'

#This is going to be an array, so let's define it as such. We'll need this to output the CSV file.
$mailboxcounts = @()

# *Warning* This could get very large! As much as I really dislike loading the entire result into a variable, everything else I tried used about the same RAM. Figure about 1.5 GB of RAM per 10,000 mailboxes. YMMV.
$totalmbx = Get-Mailbox -ResultSize 'unlimited'

#Let's get some counts up-front for the header of the report. (Management summary!) Total mailboxes and licensed.
$totalmbxcount = ($totalmbx | Measure-Object).Count
$totallicensedmbxcount = ($totalmbx | ? {$_.SKUAssigned -eq 'True'} | Measure-Object).Count

#Make the title text
$reporttitle = "Office 365 Mailbox Counts - Total: " + $totalmbxcount + ", Licensed: " + $totallicensedmbxcount

#Continue report - This is appended to an existing text file to make a larger report. This can be used separately.
echo "**************************************************" | Out-File -Append -FilePath $Path -NoClobber
echo $reporttitle | Out-File -Append -FilePath $Path -NoClobber
echo (Get-Date) | Out-File -Append -FilePath $Path -NoClobber

$domains = Get-AcceptedDomain | Sort-Object -Property DomainName | Select DomainName

Foreach ($domain in $domains) {

    $domainsearch = "*@" + $domain.DomainName
    $mailboxes = $totalmbx | ? {$_.WindowsEmailAddress -like $domainsearch}

    #If a domain has no mailboxes, just skip it.
    If ($mailboxes -ne $null) {

        #Gather all the stats for the different types of mailboxes. This can take a while with lots of mailboxes.
        $mbxuser = ($totalmbx | ? {$_.WindowsEmailAddress -like $domainsearch} | ? {$_.RecipientTypeDetails -eq 'UserMailbox'} | Measure-Object).Count
        $mbxshared = ($totalmbx | ? {$_.WindowsEmailAddress -like $domainsearch} | ? {$_.RecipientTypeDetails -eq 'SharedMailbox'} | Measure-Object).Count
        $mbxroom = ($totalmbx | ? {$_.WindowsEmailAddress -like $domainsearch} | ? {$_.RecipientTypeDetails -eq 'RoomMailbox'} | Measure-Object).Count
        $mbxequipment = ($totalmbx | ? {$_.WindowsEmailAddress -like $domainsearch} | ? {$_.RecipientTypeDetails -eq 'EquipmentMailbox'} | Measure-Object).Count
        $mbxtotal = $mbxuser + $mbxshared + $mbxroom + $mbxequipment
        $mbxsku = ($totalmbx | ? {$_.WindowsEmailAddress -like $domainsearch} | ? {$_.SKUAssigned -eq 'True'} | Measure-Object).Count

        #Let's add a separator to the report and another line for the domain & stats.
        echo "---------------" | Out-File -Append -FilePath $Path -NoClobber
        echo ("Domain: " + $domain.DomainName + " - User Mbxs: " + $mbxuser + ", Shared Mbxs: " + $mbxshared + ", Rm Mbxs: " + $mbxroom + ", Eq Mbxs: " + $mbxequipment + ", Licensed Mbxs: " + $mbxsku + ", Total Mbxs: " + $mbxtotal) | Out-File -Append -FilePath $Path -NoClobber

        #Let's start gathering data for the CSV file.
        $domaincount = New-Object System.Object
        $domaincount | Add-Member -type NoteProperty -name DomainName -value $domain.DomainName
        $domaincount | Add-Member -type NoteProperty -name UserMbx -value $mbxuser
        $domaincount | Add-Member -type NoteProperty -name SharedMbx -value $mbxshared
        $domaincount | Add-Member -type NoteProperty -name RoomMbx -value $mbxroom
        $domaincount | Add-Member -type NoteProperty -name EquipMbx -value $mbxequipment
        $domaincount | Add-Member -type NoteProperty -name LicensedMbx -value $mbxsku
        $domaincount | Add-Member -type NoteProperty -name TotalMbx -value $mbxtotal
        $mailboxcounts += $domaincount
    }
}

echo "---------------" | Out-File -Append -FilePath $Path -NoClobber
echo " " | Out-File -Append -FilePath $Path -NoClobber
echo "End of report" | Out-File -Append -FilePath $Path -NoClobber

#Create the CSV file. Make your project manager happy.
$mailboxcounts | Export-Csv -Path $csvPath -NoTypeInformation

#Make a copy of the log and CSV that we will email using the batch file that's calling this script (or, put your own SMTP commands in).
copy $Path -Destination 'C:\Scripts\logs\mailbox-counts.log' -Confirm:$false -Force
copy $csvPath -Destination 'C:\Scripts\logs\cloudmxbcounts.csv' -Confirm:$false -Force
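If you would rather not rely on a wrapper batch file at all, one option is to mail the report straight from the script; a minimal sketch is below, where the SMTP server and addresses are placeholders.

# Assumed SMTP server and addresses - adjust before use
Send-MailMessage -SmtpServer 'smtp.contoso.com' -From 'o365reports@contoso.com' -To 'pm@contoso.com' -Subject $reporttitle -Body (Get-Content $Path | Out-String) -Attachments $csvPath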

Webinar: Deliver Higher Quality Software with Automated Coded UI Testing Mar 27


Considerations for multiple WSUS instances sharing a content database when using System Center Configuration Manager, but without Network Load Balancing (NLB)


When you use System Center Configuration Manager (SCCM) to manage updates, clients do not use the WSUS servers directly for all operations (such as reporting). In this configuration, it is possible for multiple WSUS instances that are part of SCCM to share the same database without being configured as an NLB cluster. TechNet states:

When you install more than one software update point at a primary site, use the same WSUS database for each software update point in the same Active Directory forest. If you share the same database, it significantly mitigates, but does not completely eliminate the client and the network performance impact that you might experience when clients switch to a new software update point. A delta scan still occurs when a client switches to a new software update point that shares a database with the old software update point, but the scan is much smaller than it would be if the WSUS server had its own database.

To set up such a configuration, you would install multiple SCCM SUPs with WSUS on a shared database and configure WSUS/SUP to store content on a file share, but stop short of enabling NLB:

  1. Install SQL Server.
  2. Install the first WSUS server, creating the database.
  3. Install the other WSUS servers, pointing them at the existing database.
  4. Create a share for content (the computer accounts must have Change permission).
  5. On the first WSUS server, use WsusUtil.exe movecontent to change the content location (a scripted sketch of steps 5 and 6 follows this list).
  6. On each WSUS server, in IIS, ensure the “Content” virtual directory path is set to the share, and specify an account to use to connect to the path (to cater for anonymous access).
  7. Add the SUP (Software Update Point) role to the first server and synchronize.
  8. Add the SUP role to the other servers.
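A hedged sketch of steps 5 and 6 is below; the share path and connect-as account are placeholders, it assumes the default “WSUS Administration” site name in IIS, and it uses appcmd.exe to set the virtual directory's physicalPath, userName, and password attributes.

# Step 5: move existing content to the shared location (run once, on the first WSUS server)
& "$env:ProgramFiles\Update Services\Tools\WsusUtil.exe" movecontent \\FILESERVER01\WsusContent D:\movecontent.log

# Step 6: point the Content virtual directory at the share and set a connect-as account (run on every WSUS/SUP server)
& "$env:windir\System32\inetsrv\appcmd.exe" set vdir "WSUS Administration/Content" -physicalPath:"\\FILESERVER01\WsusContent" -userName:"CONTOSO\wsuscontent" -password:"<placeholder>"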

Rather than connecting to WSUS directly, clients choose a random SUP to connect to (which is the expected behavior, and why NLB is not needed on the server side). This is a supported configuration of WSUS, but only when WSUS is being used in an SCCM deployment.

Top Contributors Awards! Form Based Authentication in SharePoint 2013, Braille in .Net, Wiki tips en français, BizTalk & so much more...


Welcome back for another analysis of contributions to TechNet Wiki over the last week.

First up, the weekly leader board snapshot...

 

Peter Geelen romps to the top again; or rather, he continues his consistent march through March. Reliability is his middle name.

Also a good performance from Mohammad. Yelenah still topping the monthly newies.

 

As always, here are the results of another weekly crawl over the updated articles feed.

 

Ninja Award: Most Revisions Award  
Who has made the most individual revisions

 

#1 Mohammad Nizamuddin with 91 revisions. Crawler shows Mohammad beats Peter on sheer revisions, but that's not the whole story.

  

#2 Peter Geelen - MSFT with 72 revisions.

  

#3 Adn-Studio95 with 27 revisions.

  

Just behind the winners but also worth a mention are:

 

#4 Idan Vexler with 20 revisions.

  

#5 Monimoy Sanyal with 18 revisions.

  

#6 Ersin CAN - TAT with 16 revisions.

  

#7 Ed Price - MSFT with 15 revisions.

  

#8 Mesut Yilmaz - TAT with 15 revisions.

  

#9 Durval Ramos with 15 revisions.

  

#10 Davut EREN - TAT with 14 revisions.

  

 

Ninja Award: Most Articles Updated Award  
Who has updated the most articles

 

#1 Mohammad Nizamuddin with 83 articles. Mohammad however is clearly more prolific this week on sheer article count, congrats!

  

#2 Peter Geelen - MSFT with 46 articles.

  

#3 Carsten Siemens with 12 articles.

  

Just behind the winners but also worth a mention are:

 

#4 Durval Ramos with 10 articles.

  

#5 Naomi N with 9 articles.

  

#6 Yavuz Tasci -TAT with 9 articles.

  

#7 Davut EREN - TAT with 8 articles.

  

#8 Ersin CAN - TAT with 7 articles.

  

#9 Ed Price - MSFT with 6 articles.

  

#10 Nonki Takahashi with 5 articles.

  

 

Ninja Award: Most Updated Article Award  
Largest amount of updated content in a single article

 

The article to have the most change this week was Form based Authentication ( FBA ) in SharePoint 2013, by Sugumaran Srinuvasan

This week's revisers were Sugumaran Srinuvasan & Peter Geelen - MSFT.

This great guide will help any beginner hop through the steps of FBA configuration. This is copied in from Sugumaran's own blog, and is gratefully received.

You should add a reference to your blog though, or it may get mistaken for plagiarism by our cursory auto-checks.

 

 

Ninja Award: Longest Article Award  
Biggest article updated this week

 

This week's largest document to get some attention is Braille Code in .Net, by Paul Ishak

This week's reviser was Mohammad Nizamuddin.

Love this article. Winner of TechNet Guru January '14. This is just the kind of thing that catches the imagination and inspires many.

 

 

Ninja Award: Most Revised Article Award  
Article with the most revisions in a week

 

This week's most fiddled-with article is Comment obtenir un bon article Wiki, by Ed Price - MSFT. It was revised 26 times last week.

This week's reviser was Adn-Studio95.

This article explains how best to write a TechNet Wiki article. Originally machine translated by Ed, others like Adn have been buffing it to perfection!

 

Ninja Award: Most Popular Article Award  
Collaboration is the name of the game!

 

The article to be updated by the most people this week is TechNet Guru Contributions for March, by XAML guy

March is filling up nicely, but there are many more chances to get a win in this week, if you can spare the time.

This week's revisers were Idan Vexler, Mikel x Mikel, Nonki Takahashi, boatseller, Ravindar Thati, Tomasso Groenendijk, GirirajSingh, Jesper Arnecke, Rahber, Ed Price - MSFT, Shanky_621, Sharjeel (MSP) & mcosmin.

 

The article to be updated by the third most people this week is BizTalk: Detecting a Missing Message, by boatseller

This week's revisers were boatseller, Suleiman Shakhtour, Tomasso Groenendijk & Peter Geelen - MSFT.

This great new article from boatseller is truly the kind of gem we like here at TechNet Wiki. Plenty of text and images to explain his point. Thanks boatseller!

 

 

Ninja Award: Ninja Edit Award  
A ninja needs lightning fast reactions!

 

Below is a list of this week's fastest ninja edits. That's an edit to an article made soon after another person has edited it.

Another productive month, with some great new content coming in. 

Hope you all have a great week ahead. Hope to see you (maybe even write about you) next week!

Best regards,
Pete Laker

 
