
Weekend Scripter: Unexpected Case Sensitivity in PowerShell


Summary: PowerShell MVP, Mike F Robbins, discusses case sensitivity in Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome back guest blogger, Mike Robbins.


Mike F Robbins is a Microsoft MVP for Windows PowerShell and a SAPIEN Technologies MVP. He is a co-author of Windows PowerShell TFM 4th Edition, and is a contributing author of a chapter in the PowerShell Deep Dives book. Mike has written guest blog posts for the Hey, Scripting Guy! Blog, PowerShell Magazine, and PowerShell.org. He is the winner of the advanced category in the 2013 PowerShell Scripting Games. Mike is also the leader and co-founder of the Mississippi PowerShell User Group. He blogs at mikefrobbins.com and can be found on Twitter @mikefrobbins.

A few months ago I wrote a blog post called Some Cases of Unexpected Case Sensitivity in PowerShell, and I thought I would expand on that topic a bit here as a sequel to that original post.

Everything is case insensitive in PowerShell, right? Well, that’s what we’re normally taught, but it’s actually not quite that simple. The answer to whether PowerShell is case sensitive is, “It depends.” In general, PowerShell is not case sensitive, but there are a number of caveats to case sensitivity, some of which are intentional, and some that are unexpected.

Operators

By default, the comparison operators that you’ll commonly see used with PowerShell are case insensitive. These include:

  • -eq
  • -ne
  • -gt
  • -ge
  • -lt
  • -le
  • -like
  • -notlike
  • -match
  • -notmatch
  • -contains
  • -notcontains
  • -in
  • -notin
  • -replace

Each of these comparison operators has a corresponding case-sensitive version:

  • -ceq
  • -cne
  • -cgt
  • -cge
  • -clt
  • -cle
  • -clike
  • -cnotlike
  • -cmatch
  • -cnotmatch
  • -ccontains
  • -cnotcontains
  • -cin
  • -cnotin
  • -creplace

Each of these operators also has a case-insensitive version that begins with an i instead of a c. These versions are explicitly case insensitive, although you’ll rarely (if ever) see them used because their behavior is the same as when neither i nor c is specified.

Here is an example of using these operators:

PS C:\> 'PowerShell' -eq 'powershell'

True

PS C:\> 'PowerShell' -ceq 'powershell'

False

PS C:\> 'PowerShell' -ieq 'powershell'

True

PS C:\>

To learn more about comparison operators, see the about_Comparison_Operators Help topic in PowerShell or view the online version: about_Comparison_Operators.

Escape characters

All of the alphabet-based escape characters are case sensitive:

  • `a    Alert
  • `b    Backspace
  • `f    Form feed
  • `n    New line
  • `r    Carriage return
  • `t    Horizontal tab
  • `v    Vertical tab

The special meaning of these characters doesn’t occur when an upper-case character is specified, for example:

PS C:\> "Hey, Scripting `n Guy! Blog"

Hey, Scripting

 Guy! Blog

PS C:\> "Hey, Scripting `N Guy! Blog"

Hey, Scripting N Guy! Blog

PS C:\>

To learn more about escape characters, see the about_Escape_Characters Help topic in PowerShell or view the online version: about_Escape_Characters.

Region and EndRegion tags

Regions were introduced in Windows PowerShell 3.0. Specifying the #region or #endregion tags in anything other than lower-case letters breaks the ability to collapse that portion of the code.
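
For example, the first region below can be collapsed in the Windows PowerShell ISE, but the second cannot. A minimal sketch (the function is just a placeholder):

#region Helper functions
function Get-MrThing { 'collapsible' }
#endregion

#Region Broken
# The capital R prevents this region from collapsing in the ISE
#EndRegion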

External technologies

When you’re using PowerShell to access information from a technology that’s outside of PowerShell, such as from Active Directory with ADSI, the rules for case sensitivity in PowerShell no longer apply. An example is shown in the following function. If the case of samaccountname is changed inside the [pscustomobject] block, no value will be returned for SamAccountName.

#Requires -Version 3.0
function Get-MrADUser {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory,
                   ValueFromPipeline,
                   ValueFromPipelineByPropertyName)]
        [String[]]$UserName
    )
    PROCESS {
        foreach ($user in $UserName){
            $Search = [adsisearcher]"(&(objectCategory=person)(objectClass=user)(samaccountname=$user))"
            foreach ($user in $($Search.FindAll())){
                $stringSID = (New-Object -TypeName System.Security.Principal.SecurityIdentifier($($user.Properties.objectsid),0)).Value
                $objectGUID = [System.Guid]$($user.Properties.objectguid)
                [pscustomobject]@{
                    DistinguishedName = $($user.Properties.distinguishedname)
                    Enabled = (-not($($user.GetDirectoryEntry().InvokeGet('AccountDisabled'))))
                    GivenName = $($user.Properties.givenname)
                    Name = $($user.Properties.name)
                    ObjectClass = $($user.Properties.objectclass)[-1]
                    ObjectGUID = $objectGUID
                    SamAccountName = $($user.Properties.samaccountname)
                    SID = $stringSID
                    Surname = $($user.Properties.sn)
                    UserPrincipalName = $($user.Properties.userprincipalname)
                }
            }
        }
    }
}
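
A quick usage sketch for the function (the account names are placeholders):

PS C:\> Get-MrADUser -UserName 'mrobbins'

PS C:\> 'user1', 'user2' | Get-MrADUser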

Cmdlets

Some cmdlets have parameters to control case sensitivity. One example is the Sort-Object cmdlet, which has a CaseSensitive parameter. Specifying this parameter when using Sort-Object indicates that the sort should be case sensitive. By default, sorting with Sort-Object is case insensitive. Here are some examples:

PS C:\> 'PowerShell', 'powershell', 'POWERSHELL' | Sort-Object

POWERSHELL

powershell

PowerShell

PS C:\> 'PowerShell', 'powershell', 'POWERSHELL' | Sort-Object -CaseSensitive

powershell

PowerShell

POWERSHELL

PS C:\>

Switch statements

Case-insensitive switch statements are not supported in PowerShell workflows. For more detailed information about this topic and to learn how to work around this, see the following Hey, Scripting Guy! Blog post: PowerShell Workflows: Restrictions.
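
As a minimal repro sketch (the workflow name is made up; requires Windows PowerShell 3.0 or later):

workflow Test-MrSwitch {
    switch ('PowerShell') {
        'powershell' { 'matched' }
        default { 'no match' }
    }
}

In a regular script this switch would match, but if workflow switch matching behaves case-sensitively as described above, calling Test-MrSwitch returns 'no match' instead.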

Hash tables

Hash tables are case insensitive by default. Notice that I can’t add ComputerName and computername to the hash table in the following example because, by default, the keys are the same:

PS C:\> $hashtable = @{}

PS C:\> $hashtable.Add('ComputerName', 'PC01')

PS C:\> $hashtable.Add('computername', 'pc01')

Exception calling "Add" with "2" argument(s): "Item has already been added. Key in dictionary: 'ComputerName'  Key

being added: 'computername'"

At line:1 char:1

+ $hashtable.Add('computername', 'pc01')

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException

    + FullyQualifiedErrorId : ArgumentException

PS C:\>

A special case-sensitive hash table can be created though. Notice in the following example, I can add the two keys that failed in the previous example:

PS C:\> $hashtable = New-Object -TypeName System.Collections.Hashtable

PS C:\> $hashtable.Add('ComputerName', 'PC01')

PS C:\> $hashtable.Add('computername', 'pc01')

PS C:\> $hashtable

Name                           Value

----                           -----

ComputerName                   PC01

computername                   pc01

PS C:\>

JSON

JSON is case sensitive, but the custom objects that are created when using the ConvertFrom-Json cmdlet are case insensitive. For more detailed information about JSON, see this Hey, Scripting Guy! Blog post: JSON Is the New XML.
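
For example (a quick sketch):

PS C:\> $json = '{"Name":"PowerShell"}' | ConvertFrom-Json

PS C:\> $json.name

PowerShell

PS C:\> $json.NAME

PowerShell

Both property lookups succeed because the resulting custom object resolves property names case-insensitively, even though the original JSON keys are case sensitive.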

Methods

As you can see in the following example, methods such as Contains and Replace are case sensitive, whereas the -contains and -replace operators are not case sensitive.

PS C:\> $Features = 'SQLENGINE', 'SSMS', 'ADV_SSMS'

PS C:\> $Features.Contains('SQLENGINE')

True

PS C:\> $Features -contains 'SQLENGINE'

True

PS C:\> $Features.Contains('SQLEngine')

False

PS C:\> $Features -contains 'SQLEngine'

True

PS C:\>

Do you think that you’ll outsmart the case-sensitivity issues in PowerShell by using the ToUpper or ToLower method to convert strings to a specific case before performing comparison operations on them? Not so fast. You could be creating a different problem—especially if you’re potentially working with other cultures. If you’re thinking about going this route, consider reading my blog post Using Pester to Test PowerShell Code in Other Cultures.
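
For example, in the Turkish culture the lowercase 'i' uppercases to the dotted capital 'İ', so a culture-aware ToUpper can break comparisons that assume invariant behavior. A quick sketch:

PS C:\> 'file'.ToUpper([cultureinfo]'tr-TR')

FİLE

PS C:\> 'file'.ToUpperInvariant()

FILE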

~Mike

Thanks for clarifying a somewhat confusing issue for us, Mike.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


An overview video of the MPSA licensing agreement is now available [Updated 1/10]


The Microsoft Products and Services Agreement (MPSA) is a relatively new licensing program that became available about a year ago. By signing up for MPSA, you can simplify product purchasing, save time on license management, and buy all the products you need at the best prices.

Overview of MPSA
(Please visit the site to view this video)

Introducing purchasing accounts for the Microsoft Products and Services Agreement (MPSA). With the introduction of "purchasing accounts" in MPSA, you can now purchase and manage software and online services under a single agreement.

Video about purchasing accounts
(Please visit the site to view this video)

▼ See here for more details about MPSA

Older Versions Of Internet Explorer Reach End Of Support January 12, 2016 - Part 2


In Part 1 I highlighted some of the resources that focus on end of support for older versions of Internet Explorer. In this post, I'll focus on some of the compatibility and migration resources that are available.

Tips and tricks to manage Internet Explorer compatibility

This article focuses on how to leverage the Enterprise Mode Site List for better backwards compatibility with legacy web apps that may have kept earlier versions of IE deployed. The general guidance is of course to upgrade to Internet Explorer 11 on the desktop and server versions of Windows that support it, but the article gives some advice on how to approach things depending on whether you are currently on IE 8, 9, or 10.

What is Enterprise Mode?

Enterprise Mode is a compatibility mode that runs on Internet Explorer 11 on Windows 8.1 Update and Windows 7 devices. It lets websites render using a modified browser configuration that’s designed to emulate either Windows Internet Explorer 7 or Windows Internet Explorer 8, avoiding the common compatibility problems associated with web apps written and tested on older versions of Internet Explorer.

Turn on Enterprise Mode and use a site list

Before you can use a site list with Enterprise Mode, you need to turn the functionality on and set up the system for centralized control. By allowing centralized control, you can create one global list of websites that render using Enterprise Mode. Approximately 65 seconds after Internet Explorer 11 starts, it looks for a properly formatted site list. If a new site list is found, with a different version number than the active list, IE11 loads and uses the newer version. After the initial check, IE11 won’t look for an updated list again until you restart the browser.
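
For reference, here is a minimal v.1 site list written out from PowerShell (the domain and the share path are placeholders); bumping the version attribute is what signals IE11 to pick up a newer list:

$siteList = @'
<rules version="1">
  <emie>
    <domain>contoso.com</domain>
  </emie>
</rules>
'@
$siteList | Out-File -FilePath \\server\share\EMIESiteList.xml -Encoding utf8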

Enterprise Mode Site List Manager for Windows 7 and Windows 8.1

This tool lets IT Professionals create and update the Enterprise Mode Site List in the version 1.0 (v.1) XML schema. The v.1 Enterprise Mode XML schema is supported on Windows 7, Windows 8.1, and Windows 10.

Internet Explorer 11 (IE11) - Deployment Guide for IT Pros

Use this guide to learn about the several options and processes you'll need to consider while you're planning for, deploying, and customizing Internet Explorer 11 for your customers' and employees' computers.

Internet Explorer TechCenter

This page has five main categories of resources, covering the Explore, Plan, Deploy, Manage, and Support stages of IE11 assessment and deployment.

Web Application Compatibility Lab Kit

This lab includes two options: a lite version, weighing in at 180 MB, or the full version, weighing in at 21 GB. The full version includes the necessary virtual machines to run through the labs, whereas the lite version requires you to provide your own Windows 7 and Windows 10 clients.

iSCSI or SMB Direct, Which one is better?


Since Windows Server 2012, we support placing Hyper-V VMs and SQL Server databases on a file share (SMB share). That provides a cost-effective storage solution. Meanwhile, Windows Server also has a built-in iSCSI Target and initiator. Then the question comes up: which one is better, iSCSI or SMB?

Like the consultant's standard answer: "It depends..." In the following circumstances, you may want to use iSCSI.

I need shared SAN block storage because…

  • It will be the shared storage for my new failover cluster
  • My application delivers best performance with block storage
  • I have an application not supported on File Servers
  • I am most familiar with SAN block storage
  • I want to boot from SAN storage

I prefer iSCSI SANs because…

  • Unconstrained by the limits (number of ports, distance, and so on) of a shared SAS JBOD
  • Fibre Channel is too expensive – expensive HBAs, expensive switches, expensive FC expertise…
  • I don’t need special NICs or switches to build an iSCSI SAN

Then what's the benefit for using SMB?

  • Easy to use. No need to worry about targets, initiators, LUN provisioning, etc.
  • Supports RDMA, which provides low latency and more consistent performance.
  • Supports Multichannel. Aggregates network bandwidth and provides fault tolerance if multiple paths are available between client and server. No need to configure MPIO or NIC Teaming; it's all automatic in most cases. (See the verification sketch after this list.)
  • Provides server-side caching, which Microsoft iSCSI Target doesn't support.
  • Durable handles. Transparently reconnects to the server during temporary disconnections. A Scale-Out File Server cluster can provide continuous availability beyond traditional high availability.
  • Able to scale out. Microsoft iSCSI Target doesn't support scaling out across multiple nodes.
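
Here is the verification sketch mentioned above: a few client-side cmdlets (available in Windows Server 2012 and later) that show whether Multichannel and RDMA are actually in play.

PS C:\> Get-SmbClientNetworkInterface

PS C:\> Get-SmbMultichannelConnection

PS C:\> Get-NetAdapterRdma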

You may already be able to make the decision. If not, I guess the next question would be: how about performance? It sounds like iSCSI should provide better performance because it's block storage. Is that true?

In order to compare the performance of iSCSI and SMB, I don't want a hardware resource (say, network or storage) to become the bottleneck, so I prepared the lab environment as below.

WOSS-H2-14 (Role: iSCSI Target and SMB File Share)

  • DELL 730xd
  • CPU: E5-2630v3 x2
  • Memory: DDR4 128GB
  • Storage: Micron P420m 1.4TB
  • Network: Mellanox Connect-3 56Gb/s Dual port
  • OS: Windows Server 2012 R2

WOSS-H2-16 (Role: iSCSI Initiator and Hyper-V host)

  • DELL 730xd
  • CPU: E5-2630v3 x2
  • Memory: DDR4 144GB
  • Network: Mellanox Connect-3 56Gb/s Dual port
  • OS: Windows Server 2012 R2

Benchmark:

------------------------------

Before comparing iSCSI and SMB, let's first take a look at the benchmarks of the network and storage to make sure they will not be our bottleneck.

The above test environment can sustain network throughput of up to 3,459 MB/s.

Now let's look at the storage. You can see the drive can provide 3,477 MB/s of throughput and 786,241 IOPS for 100% random raw disk reads.

Test case 1: Performance of iSCSI Disk

----------------------

I created a 100 GB iSCSI volume on WOSS-H2-14 and configured the iSCSI initiator on WOSS-H2-16 to connect to that target and volume.
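
For reference, the server-side part of that setup can be scripted with the iSCSI Target cmdlets. This is a sketch from memory with placeholder paths and names, so verify the parameter names in your environment:

# On WOSS-H2-14: create the virtual disk, create the target, and map them
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\Test.vhdx -SizeBytes 100GB
New-IscsiServerTarget -TargetName Test-Target -InitiatorIds "IPAddress:192.168.1.16"
Add-IscsiVirtualDiskTargetMapping -TargetName Test-Target -Path C:\iSCSIVirtualDisks\Test.vhdx

# On WOSS-H2-16: register the target portal and connect
New-IscsiTargetPortal -TargetPortalAddress WOSS-H2-14
Get-IscsiTarget | Connect-IscsiTarget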

Then I ran CrystalDiskMark and FIO against that volume. As you can see from the screenshot below, the read throughput is only 1,656 MB/s, which is far below our network's and storage's throughput. However, the write throughput is the same as in our storage benchmark. The Micron P420m is not designed for write-intensive applications, and its write performance becomes the bottleneck in this case. So let's focus on comparing the read performance of iSCSI and SMB.

The FIO output tells us the 4 KB random IO is only 24,000 IOPS. That's also far below the drive's capability.

Test case 2: Performance of VM running in an iSCSI volume

----------------------

On WOSS-H2-16 I created a VM and attached a 50 GB fixed data VHD to it. That data VHD is on the above iSCSI volume. Here is the result.

I also captured a performance log while running FIO. The FIO test includes two parts: the first part is 5 minutes of 100% 4 KB random reads; the second part is 60 minutes of 100% 4 KB random writes. From the picture below, we can see both 4 KB random reads and writes are capped at around 230,000 IOPS.

Test case 3: Performance of VM running in a SMB Share

----------------------

I then moved the above data VHD from the iSCSI volume to an SMB share on WOSS-H2-14. Both the iSCSI volume and the SMB share are actually on the same drive. Here is the test result for this case.
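
The move itself can be done live with storage migration. A sketch with hypothetical VM and share names:

PS C:\> Move-VMStorage -VMName TestVM -DestinationStoragePath \\WOSS-H2-14\VMShare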

Summary:

---------------------

Comparing the test results from the above test cases, we find that SMB provides better throughput, much higher IOPS, and much lower latency.

PowerTip: Use PowerShell to Perform Case-Sensitive Comparison


Summary: Learn how to perform a case-sensitive comparison in Windows PowerShell.

Hey, Scripting Guy! Question I need to compare two strings while taking case sensitivity into account. I try using -eq, but it does not work. How can I use Windows PowerShell to perform a case-sensitive comparison?

Hey, Scripting Guy! Answer Use the -ceq operator instead of -eq. Here are two examples that compare the results of -eq and -ceq:

PS C:\> 'scripting guys' -eq 'Scripting Guys'

True

PS C:\> 'scripting guys' -ceq 'Scripting Guys'

False

PS C:\>

MIM-CM 2016 + Virtual Smart Card Modern App Part I


Hello all, and Happy New Year!  I hope everyone is recharged from a most excellent holiday season!  I'm back with another security-focused topic: virtual smart cards, along with a look at MIM CM 2016 to life-cycle the credential.  The threat landscape today is more fluid and dynamic than ever before, and organizations are looking at new security technologies to better protect themselves and their assets.  Two-factor authentication has been around for some time in the form of smart cards, RSA tokens, etc.  It's widely considered to be a stronger and more secure form of authentication than traditional user name and password credentials.  In this day and age we have the advent of the virtual smart card (VSC): a smart card that is always inserted, has no visible/tangible physical footprint, and uses a machine's trusted platform module (TPM) for a secure root of trust and isolated cryptography.  More and more of today's newest security technologies (Credential Guard, code integrity, Device Guard, etc.) use the TPM due to its hardware assurance-level benefits. Today I want to share with you my experience getting Microsoft Identity Manager 2016 installed in a lab for testing and dev purposes.  One of the new features of MIM CM 2016 is the addition of a nice Modern-style app that lets you enroll, renew, and manage virtual smart cards.  As it turns out, MIM CM is a little lengthy in getting off the ground, at least that's been my experience; but once it's up and running there are some nifty features for virtual smart cards!  Good experience with PKI and Windows Certificate Authorities is very helpful when setting up FIM CM.  This setup assumes you have all of the servers required for MIM CM: a domain controller, a MIM-CM server, a SQL server, a Windows 10 (build 10586) Enterprise client on physical hardware with a TPM 1.2 or later, and a CA server, all running 2012 R2 Update, fully patched.  Let's get to it!

Schema Extension
MIM CM requires some additional attributes and extended rights to be added to the schema.  Hopefully you know the drill on the schema and all of the prerequisites for modifying it; it's a one-way ticket, so make sure you have at least one good system state backup from at least one DC in every domain.  Drop your admin account in Schema Admins and log off/log on to a DC to reflect the new membership.  Review the Schema folder on the MIM 2016 media; if you're doing a single-forest, single-domain installation like I am, you'll want to use the "ResourceForest.ldif" file and use ldifde to extend it:
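
The command looks roughly like this (a sketch: the MIM ldif files use a placeholder domain DN that -c swaps for your own, so double-check the file and substitute your domain's DN before running):

ldifde -i -f ResourceForest.ldif -c "DC=X" "DC=contoso,DC=com"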

This is fairly straightforward and hopefully you get a success message.  Don't forget to take yourself out of Schema Admins when you're done ; )

MIM-CM Pre-requisites
Let's just say there are a number of these.  You'll want to use the following TechNet article for FIM CM, which largely still applies.  We've already gone through the schema extension, but there are also SQL and certificate template prerequisites documented there that need to be completed. 

For this scenario I didn't perform the optional steps in the article.  One of the last things is the IIS prerequisites for the MIM-CM web server. I used Server Manager to add all of the role services I wanted to install, then exported them to an XML file via the UI so that I could add them via PowerShell and reuse them on an additional CM web server later:
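
Reapplying the exported configuration on another server is then roughly a one-liner (the file name is whatever you exported from Server Manager):

Install-WindowsFeature -ConfigurationFilePath .\DeploymentConfigTemplate.xml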

MIM-CM Install
Now that all of the prerequisites are complete, we're ready to install MIM-CM on the designated MIM-CM server.  Launch and run the MIM-CM installer:

Next!


Select the Portal and Update Service.

This is the IIS virtual directory - I used the default configuration.  Go Next and click Install.  That should wrap up the installation on the CM web server.  Now that the MIM CM website is in IIS, we need to disable kernel-mode authentication, which is documented here.  Let's move over to the Certificate Authority server and install the MIM-CM bits there. Same drill here: execute the installer, and this time we'll choose the MIM CM CA files, which will install the MIM CM policy module and exit module:

Next we need to allow the CA to access the MIM CM database, which is detailed here.  Ensure you complete these steps or the CA will not be present in the MIM CM SQL database.  Now that we've laid down the bits, we need to run the MIM CM Configuration Wizard on the server where you installed the MIM CM portal.  So head over there; on the Start menu you should see the MIM CM Configuration Wizard. Execute it and step through it.  Ensure the credential you're using has SA rights on the SQL server you're going to use (or optionally specify another account) and has the appropriate rights in AD to create all of the user objects.

Next.  Select the Certification Authority that MIM CM will use for certificate issuance by clicking Browse.

Specify the SQL Server name and which credential you are going to use to create the database.

Specify the database name, I accepted the default name and used SQL integrated authentication.

For the AD piece it's recommended to keep the default setting, and so it was done.

This screen is interesting and is actually new with MIM CM 2016: we can now use the power of AD FS for authorization and authentication into the MIM CM portal.  For this scenario we will use IWA/Kerberos.

If you chose to follow the aforementioned article and create all of the accounts yourself in AD, here is where you punch them into the configuration wizard.  For this scenario I used the MIM default settings and specified the OU that the accounts will be created in.

MIM CM needs a couple of certificates to function; they are issued from the certificate templates that you created and made available on the CA based on the TechNet article previously mentioned.  Choose the respective certificate templates for each entry; if you do not see them, double-check the permissions on the templates based on the TechNet article.

Email configuration.  I didn't have any Exchange servers set up, nor was I looking to specifically test any mail-related functionality, so I used the defaults here.

Now that we've arrived at the summary, we can review our selections and configurations.  If you're ready to let her rip, click Configure.

Hopefully we arrive here!

The idea is to get MIM CM installed and configured so that we can issue virtual smart cards to end users via the MIM CM portal, and the new MIM CM modern application.  MIM CM offers additional capabilities in reporting, enrollment, retirement, and other smart card functions.  We still have quite a bit of configuring to do to get this all off of the ground.  We have some CA, AD, IIS, and MIM CM portal configurations to complete.  So we'll see you at the next post.

Jesse Esquivel

Microsoft Advanced Threat Analytics Lab Setup and Demo


Hey Folks!  I just got my ATA lab up and running and thought I’d share a few tips and tricks for those of you doing a lab or POC type setup and want to get up and running quickly.

First of all, here’s the ‘official’ documentation for ATA.  It’s worth walking through that as I won’t really detail the setup process here since I’ve already done that and don’t have the screenshots.  The setup/install for ATA is very straightforward although I’ll provide a few tidbits here that might help you not run into the same snags I did getting this up and running.

ATA Deployment Guide

If you need the 90 day trial bits – you can grab those here: https://www.microsoft.com/en-us/evalcenter/evaluate-microsoft-advanced-threat-analytics

They are also on MSDN if you are a subscriber there (along with a key you can use):


An example ATA topology might look like what is represented in this diagram.  But the gist here is that you NEED to have both the ATA Center AND the gateway, and they can't be the same server.

Just for reference, here’s the ATA architecture and capacity planning page:  https://technet.microsoft.com/en-us/library/mt297476.aspx


In my lab I’m running everything (DC, ATA Center, and ATA Gateway) in Hyper-V VMs.  I happen to be using Windows Server 2016 Tech Preview 4, but most folks will, at least at the time of this writing, use 2012 R2, which will work fine of course.  2012 R2 is required – the original Windows Server 2012 is not supported.  The key here is that you’ll need to enable port mirroring on both your DC and ATA Gateway VMs, so keep that in mind.
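
Port mirroring can also be set from PowerShell on the Hyper-V host. A sketch with hypothetical VM and adapter names:

# Mirror traffic out of the DC...
Set-VMNetworkAdapter -VMName DC01 -PortMirroring Source

# ...and into the gateway's capture adapter
Get-VMNetworkAdapter -VMName ATAGW01 -Name CAPTURE | Set-VMNetworkAdapter -PortMirroring Destination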

NOTE: As of this writing, using ATA in Microsoft Azure is not supported.  So if you have domain controllers in Azure, you will not be able to configure the port mirroring (or the gateway servers, for that matter) since those virtual switch capabilities are not exposed.  The product teams are working on this, so stay tuned…

When you are installing the ATA Center you’ll need a VM with two NICs – easy to do with a VM, of course.  One of them is to connect to the console and the other is for ATA management.  Make sure you note the IPs correctly when you get this set up.  Read the deployment guide as well – there’s a Windows Server hotfix that needs to be applied or you’ll have issues.


ATA Center Configuration

Now, on the DC, go to the network settings on the VM, enable port mirroring, and choose ‘Source’ from the drop-down.


On the Gateway VM you’ll connect to the ATA Center console IP address and then download the gateway installer components.  Follow the instructions per the deployment guide for installing – it’s pretty straightforward.

The thing here is – you’ll need two NICs on the Gateway VM as well.  One for LAN and the other for CAPTURE.  I like to name the NICs just to keep track easily.

On the CAPTURE NIC you’ll want some ‘dummy’ information in there.  Pick an IP/subnet that is not routable, and no DNS.  Here’s what I did:


On the gateway server you’ll need to enable port mirroring on that CAPTURE NIC.  Make sure you pick the right one – in the Hyper-V settings I always look for the MAC address and then do an ipconfig in the VM to match up which NIC, since it’s not blatantly obvious in the settings which one is which.  Make sure you choose ‘Destination’ from the drop-down.


In the ATA web console you’ll see the option to check the appropriate capture NIC:


You may need to check to make sure that the ATA Service is running at this point on the gateway.


As the deployment guide states – check the perfmon counters to make sure your gateway is installed correctly.


The counter that will tell you whether or not the gateway is listening on the network is captured messages per second:


When you add that counter you’ll start to see some activity there:


At this point you should be able to login to the console and over on the right hand side you’ll see that ATA will start picking up some information about your environment:


You should also be able to use the ‘search’ box to look up users/computers, etc…


At this point we can run a few tests to check and make sure our environment is working properly.  ATA will take about 21 days to really learn your environment, so you won’t see behavior-pattern activity for a period of time because ATA is in learning mode.  However, there are quite a few attacks that will show up immediately.  Here are a few you can use to do some testing or demos:

DNS Recon:

Open a CMD window and do an NSLOOKUP against your protected DC:

nslookup - dc.domain.com

Now do this:

ls domain.com

You’ll see all your machines enumerated in the window.  Now let’s go check out the ATA console:

We can see here that I ran this a couple of times: once at 11:33 AM and again for this demo at 12:28 PM:


Let’s try something else…

Remote Execution:

Download the PsExec Tools from TechNet: https://technet.microsoft.com/en-us/sysinternals/bb896649.aspx

From a member server in the domain run the following command:

PsExec.exe \\DC01 Ipconfig

(DC01 of course represents one of your protected DCs)

You’ll get an ipconfig from the DC.  In your ATA console you’ll see this:


Finally let’s log into a PC with the honey token user account:

If you followed the directions in the ATA install guide, you have your honey token account set up. Typically this is the good ol’ DOMAIN\Administrator account.

In my case, I have a user called ‘admin’, and I just logged into one of my Windows 7 VMs with that identity.  This is what shows up in the ATA console:


So there you go!  ATA is really a pretty easy product to install and get running, especially when you consider the kind of information and insight it provides.  Hopefully this helps you on your way to getting ATA set up and configured properly, and if you are like me and doing demos and such, this gives you a few things that you can show off in real time.

Have fun!

Older Versions Of Internet Explorer Reach End Of Support January 12, 2016 - Part 3


In the final post of this series, I've embedded a number of videos related to the topics covered in the last few posts, for those of you who prefer to learn via video-based material rather than reading online.

(Part 1) Windows 10 and App Compat: What about my Windows Apps?

Kevin Remde welcomes "The App Compat Guy" himself, Chris Jackson, to the show as they kick off a three-part series on Windows 10 and app compatibility. Tune in for part 1, where they address concerns surrounding application or scenario compatibility during the move to Windows 10.

(Part 2) Windows 10 and App Compat: How do I get to IE11?

Kevin Remde and "The App Compat Guy" Chris Jackson are back for part 2 of their Windows 10 and App Compatibility series, and in today’s episode they discuss Internet Explorer and what to do about compatibility concerns for your web applications.

(Part 3) Windows 10 and App Compat: How do I get to the Edge?

In part 3 of their Windows 10 and App Compatibility series, Kevin Remde and "The App Compat Guy" Chris Jackson discuss the enigma that is the Edge web browser found in Windows 10. Why is it here? Why do we need a new browser? And, more importantly, will it work with my web applications?

Microsoft Edge (formerly “Project Spartan”) Overview

Windows 10 features Microsoft Edge, the first browser with “DO” in mind. It’s personal, productive, and responsive—but most important, it takes you beyond browsing to doing. Learn more about Microsoft Edge in this overview session, and attend the Windows 10 Browser Management session for a deeper dive on managing Internet Explorer 11 and Microsoft Edge.

Enterprise Web Browsing

Support for older versions of Internet Explorer expires on January 12, 2016, so upgrade to Internet Explorer 11 today to continue receiving security updates and technical support. Windows 10 also includes Internet Explorer 11, so upgrading can help ease your Windows migration. Learn about the browser roadmap, upgrade resources, Windows 10 browser options, and Microsoft’s new approaches to web app compatibility and interoperability with the modern web.

Enterprise Mode for Internet Explorer 11 Deep Dive

Enterprise Mode helps customers upgrade to Internet Explorer 11, which can ease Windows 10 migrations. This session is a deep-dive on deploying and managing Enterprise Mode, Enterprise Site Discovery, and other tools. By upgrading to the latest version of Internet Explorer, customers can stay up to date with Windows, services like Microsoft Office 365, and Windows devices.



Where’s Waldo (and Where’s Ed)?


Summary: Ed Wilson, Microsoft Scripting Guy, talks about the future of the Hey, Scripting Guy! Blog and other stuff.

Microsoft Scripting Guy, Ed Wilson, is here. If you are a regular reader of the Hey, Scripting Guy! Blog, you will no doubt have noticed that I have not been very active these past several weeks. However, it is a testament to the Windows PowerShell community—especially to Microsoft MVP, Sean Kearney, and the Scripting Editor, Dia Reeves—that the blog has not missed a single day of posts. Sean did an excellent job of rounding up guest bloggers, and Dia did a tremendous job of ensuring that all of the content was edited, staged, and properly published.

And me?

Well, the other day the Scripting Wife (aka PowerShell MVP, Teresa Wilson) and I made a trip to Jacksonville, Florida. It was the first time in nearly two months that I had even been out of the house. As we were driving along, I saw a sign….

Dude, we had found Waldo.


What’s been going on?

So, why have I been missing? Well, I had to have ear surgery, and it took a bit longer than expected to recover. But while I have been out, we have been busy working on Windows PowerShell stuff—and also on other stuff. First, the Windows PowerShell stuff…

PowerShell Saturday, March 19, 2016

We have been really busy lining up a truly impressive list of speakers for the first ever PowerShell Saturday to be hosted in Tampa. I will be keynoting the event, along with Mark Minasi. In addition, Mark and I will be delivering individual sessions. There will be four different tracks.

We have the awesome Ashley McGlone and Jason Walker from Microsoft, in addition to MVPs Adam Driscoll, June Blender, Sean Kearney, and Jim Christopher. Nearly half of the speakers have spoken at conferences such as TechEd, TechReady, Ignite, or the PowerShell Summit.

Did I mention that it is Tampa in March? Tampa in March is beautiful, and besides that, it makes for a great long weekend (Tampa is a little more than an hour from Orlando). Sign up soon because all prior PowerShell Saturdays have sold out, and you do not want to miss out on this awesome event: PowerShell Saturday #010: Tampa, Florida - March 19th, 2016.

Microsoft Operations Management Suite

You may have seen on our Facebook site that I am now on the #MSOMS team, and that I have recently started the Operations Management Suite Blog. This does not really change things here very much because my new manager, Jeremy Winter, wants me to continue with the Hey, Scripting Guy! Blog.

This is great news for the Windows PowerShell community because the HSG blog is the most popular blog at Microsoft, and it is a great showcase for writers from the community. In fact, it was the first blog at Microsoft to make use of guest bloggers from the community. That tradition will continue—in fact, it must continue because I have also started the Operations Management Suite Blog.

What’s up with this?

It is a very natural progression because the approach to the product is much the same as that of the Windows PowerShell team. One of the things that makes PowerShell so popular and such an awesome product is that the team listens to and is engaged with the community, and they try to implement as much of the feedback as possible.

The OMS team is the same, with the added benefit of having quarterly releases for our cloud-based solution. But for a Scripting Guy it is even better because OMS does PowerShell—in some cases (such as with workflow automation and Desired State Configuration), it does PowerShell better than PowerShell does.

This is simply the natural place for me. I talk about this in the blog post that I used to kick off the #MSOMS blog: What is Microsoft Operations Management Suite and why is it cool? Check out the Operations Management Suite Blog when you have an opportunity.

One thing is for sure, the Hey, Scripting Guy! Blog will continue, and I will still be the Scripting Guy. In addition, the Scripting Wife and I will be at PowerShell Saturday in Tampa, and we are both going to be at the PowerShell Summit this year in Seattle. We are also planning a Scripting Guys booth at Ignite. I am also planning to have an OMS booth at Ignite, so we will be heavily staffing the HSG booth with community members, and the Scripting Wife and I will be floating between the two booths. It is going to be an exciting year—but then, it is always exciting.

Join me tomorrow when I will have a guest post by Adam Bertram about advanced Windows PowerShell functions.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Sunday Surprise: The TechNet Wiki and what it is NOT


With the fresh new start of 2016, I'm quickly jumping in with an extra surprise post.

But before I kick off, allow me to wish everyone Best Wishes for 2016, lots of success on the TNWiki and a productive year of Wiki articles and blogs!

As before, I'm focusing on the TechNet Wiki governance... (yes, the bad guy that keeps rubbing your nose in the Rules and Guidelines on the TechNet Wiki...)

Instead of dumping the lists of guidelines again, this time I'd rather provide you with a top 5 of things that the TechNet Wiki is NOT.

[Rephrased: what are the most frequent violations we need to act on, as administrators, as a community, as Wiki Ninjas...]

You can figure out the rest yourself, right?

1. TechNet Wiki is NOT A BLOG

TechNet Wiki is a WIKI in the first place, meant to stimulate and enable community cooperation.
I almost said there is no 'I' in Wiki... but there is (so forget that phrase)... more importantly, Wiki starts with "W" as in "We", not "U".
It's key to write your articles in the third person, in an open format that allows the community to work on them with you.

2. TechNet Wiki is NOT a PERSONAL BLOG

Wiki starts with "W" as in "We", not "U"

YOU DO NOT OWN the content on TechNet Wiki, the community does.
BUT YOU DO GET the credits.

3. TechNet Wiki is NOT a link-through VIDEO PLATFORM 

It's essential to add meaningful, searchable content to the TechNet Wiki... Step-by-step descriptions on the TechNet Wiki are of great value.

4. TechNet Wiki must not be used to DUPLICATE or PIRATE content

This is backed by international legislation, the terms of use of the community platforms, and the rules & guidelines of TechNet Wiki...

In short: DO NOT POST content you DO NOT OWN.

5. TechNet Wiki is NOT a NEWS and MARKETING platform.

Short-term-value messages, single-point-of-view articles, and product marketing articles should not be posted on TechNet Wiki....

Pretty neat list to put in your New Year's resolutions, right?

Welcome to another year of TechNet Wiki... it's going to be very exciting!

Three steps for building cloud-based services [Updated 1/11]


(This article is a translation of Three steps for building your offerings in the cloud, posted on the Microsoft Partner Network blog on November 16, 2015. For the latest information, please see the original page.)

Getting started with cloud managed services takes real commitment, all the more so if it means significantly changing your existing managed services business model.

So once you've decided to offer cloud managed services, where do you start?
We spoke with Brian Hamel, CEO of Nuvolex (in English), about the solutions his company provides to partner companies, and he walked us through a three-step approach for effectively shifting your business to cloud managed services.

Step 1: Offer Exchange to Office 365 migration services

When moving from an on-premises environment to the cloud, which application do you think most companies migrate first?
Email.
According to Hamel, email migration is unquestionably an easy win for a cloud service provider (CSP) and the best way to start a managed services business built on the public cloud. If you are entering this space, start with a service that migrates on-premises Exchange to Office 365.

Step 2: Offer bundled Office 365 managed services

When you pitch an email migration service to a customer, also let them know you can provide ongoing management of their email tenant. According to Hamel, Office 365 Exchange administration is the ideal first service to offer. Deep down, customers would rather not be responsible for managing the tenant themselves, so appealing to that reluctance from the very first sales conversation is highly effective. Offer to take on the administrative work that burdens their staff, at a fraction of the cost of the Office 365 licenses.

Nuvolex helps you get started
The Nuvolex platform focuses on automating highly repetitive Office 365 tasks and dramatically simplifying the user experience. This is the key to letting managed service providers deliver Office 365 managed services cost-effectively.

Nuvolex offers a multi-tenant Office 365 management platform that automates most services and has a highly intuitive user interface. Administrators can easily provision as many tenants and users as they need and assign any number of licenses and Exchange attributes in literally a few clicks. Previously, automating such labor-intensive tasks required experienced engineers writing complex scripts; now anyone with basic knowledge can use the platform to handle all Office 365 Exchange administration. This significantly lowers the cost of delivering the service and improves operating margins.

The Nuvolex platform also lets you manage customers' email tenants on an ongoing basis while keeping costs down. Delivering and billing the partner's management services and Microsoft Online Services as a single integrated service also makes it easier to win customer approval.

In addition, the platform supports 14 languages. Providing 24x7 support on a global platform from the most cost-effective regions of the world further reduces operating costs and improves the profitability of the service.

The Nuvolex platform has one more benefit: you can resell it under your own brand to commercial, education, and government organizations that use Office 365 at an even larger scale than the organizations you have targeted so far. Large organizations with their own IT departments do not need a fully managed service. For these customers, you can use the platform's flexibility to offer a "self-managed" portal. Through the portal's advanced automation and enterprise-class features, you can simplify the management of a very large user base while also providing the usage reports and audit trails that are essential to operations. This solution is very cost-effective for enterprise-scale customers and, at the same time, a high-margin business for partners.

Step 3: Establish and grow the business
Unlike migration services, Office 365 managed services are annual contracts, so they bring in the recurring revenue that matters so much to a business. They also let you build an ongoing relationship with the customer.

Customers are already looking for new solutions. It is important to build a creative, adaptable business model that keeps pace with the changing times. Working with a company like Nuvolex not only helps you launch a managed services business, it also opens the door to significantly higher profitability and business expansion down the road. Going forward, Nuvolex will keep enhancing the platform's features and capabilities, along with the services partners can offer their customers.
For more about Nuvolex and its solutions for running a cloud managed services business, see the Nuvolex website (in English).

#besserlernen with digital technologies – a guest post by Volker Jürgens, Managing Director of AixConcept


"How can we learn better with digital technologies?" Under this motto, Stefan Schick called for a series of guest posts. Volker Jürgens' view: the technical equipment of schools and the continuing education of teachers must go hand in hand.

Expand training, reduce skepticism: that is how digitization succeeds in the classroom. As an IT service provider, we have already advised, equipped, or permanently supported more than 1,400 schools across Germany on IT. Our perception is that, alongside a backlog of building renovations, there is also a considerable renovation backlog in IT. The two topics are closely linked: only a modern, high-performing, learning-friendly infrastructure motivates teachers and students alike.

Inadequate, faulty, non-functioning technical equipment, on the other hand, lowers the willingness to even bother using IT in everyday school life. WLAN coverage, to name just one example, should reach all areas of the school. In addition, budgets must be set up so that school IT can be developed continuously. That at least establishes a basic precondition for the use of IT.

In a second step, teachers must be trained considerably better. Only those who see IT as a helpful tool will actually use it day to day. We therefore start with the fundamental structures, because without planning and consulting, nothing happens at all. That also applies to tablet projects of whatever kind. It is important to bring existing assets along - as a rule, the classic server with the attached clients in the school. We then work toward a setup in which cloud applications can be integrated alongside the mobile devices. That is no black magic, but it has to be developed out of the planning so that the available funds are invested in a future-proof way.

Fundamentally, there are two approaches. Either the school provides the devices (tablets, etc.) or the students bring their own. School-owned devices have the advantage that they can be managed centrally and thus kept in a uniform state (software, operating system, and so on). Student-owned devices - a colleague nicely dubbed this "bring your own disaster" - can cause a bit more trouble in a WLAN, because there we encounter different operating systems in the most varied releases. The recommendation is to aim for equipment that is as uniform as possible. Devices with a current Microsoft operating system can be integrated very well into today's school IT. That is why we rely on our "MNSpro" solution, which is based on Microsoft products. This guarantees the best data security and operational reliability.

It is already becoming clear - and is already implemented with MNSpro - that centrally hosted structures are replacing the previous scenarios with servers in the school. What Germany lacks is the broadband expansion promised by politicians. Whether we catch up internationally in the next few years depends on whether we manage to equip schools properly, offer continuing education, and reduce teachers' skepticism toward IT in schools.


A guest post by Volker Jürgens
Managing Director of AixConcept

- - - -

About the author


Volker Jürgens is Managing Director of AixConcept GmbH, a service provider that has managed and supported the complete school IT of more than 1,400 schools for 13 years. He chairs the Education and Technology committee of the Didacta association.

- - - -

"Besser lernen. Für alle!" ("Better learning. For everyone!") – Microsoft's commitment to education

Education is the key to participation in society and success at work. Microsoft has been committed to education for years, with numerous education projects, support programs for schools, IT platforms for connecting research and teaching, and targeted promotion of young technology talent. With the platform "Besser lernen. Für alle." Microsoft bundles its broad range of education offerings. From early-childhood education in kindergarten, through primary school, to university and professional development: here you will find information on all stages of lifelong learning, as well as on new media and modern learning concepts. Further information about Microsoft's commitment to education can be found at aka.ms/besserlernen.

More guest posts on #besserlernen:

January 12 – Modern Workplace: How to successfully set up remote workers


Join us for the new episode of Modern Workplace, "A workplace anywhere: how to find the right balance for your organization," airing January 12 at 18:00 (MSK). In this episode, industry experts will weigh the pros and cons of mobile work versus working in the office.

...(read more)

Are you the next Danish champion in Excel?


Microsoft is looking for participants for the first ever Danish championship in Excel. Among the disciplines are the 100-meter spreadsheet, graph ballet, and formula wrestling.

Read more or sign up here.

For 30 years, Excel has helped people all over the world get an overview of even the most unmanageable amounts of data. For over 30 years, Excel has been a welcome helper when graphs had to reveal the patterns behind the numbers. Over 30 years, Excel has evolved from a simple spreadsheet into a gem of a program.

That calls for a celebration! And what better way to celebrate the birthday than together with those who know Excel inside out? That is why we are inviting everyone interested to compete in the Danish Championship in Excel.

Maybe you use Excel better than even the sharpest analysts on Wall Street. Or maybe you are the school's Excel shark, or the one who happily gives your colleagues an Excel thrashing. Whether Excel keeps track of your personal finances or is an essential tool in your daily work, you are welcome to take part.

The Danish Championship in Excel is for those who create patterns instead of disorder - those who, in a split second, bring order to outdated and disorganized address lists. The championship is for everyone.

We make no demands regarding age or qualifications. The most important thing is that you want to show off your Excel skills.

How it works

The championship takes place in three phases:

On January 25, all registered participants compete in the opening round, which takes place online. You can take part from anywhere, as long as you have internet access.

The best 20 advance to the second round on February 1. It also takes place online, and the only difference from the opening round is that the level is raised a bit.

On February 11, it is time for the final. Only the best three go through. The final takes place at our office in Lyngby from 17:00 to 19:00. Radio and TV host Sisse Sejr Nørregard will commentate on the competition via Skype Broadcasting, where we will livestream the entire final while three judges assess the finalists' performance.

The judges are Kim Friis Laursen from 4D A/S; Henrik Zacher Molbech from Inspari, an Excel and Power BI evangelist/specialist; and Bo Kaaber Brandt, Market Intelligence Manager, Western Europe, Microsoft.

Registration:

Sign up here.

Wiki Life Hack: the 2016 New year's resolution of a Wiki article starter (The TechNet Wiki and what it IS)


With the fresh new start of 2016, I'm quickly jumping in with an extra surprise post.

But before I kick off, allow me to wish everyone Best Wishes for 2016, lots of success on the TNWiki and a productive year of Wiki articles and blogs!

As before, I'm focusing on the TechNet Wiki governance... (yes, the good guy that keeps rubbing your nose in the positive approach of Rules and Guidelines on the TechNet Wiki...)

Instead of dumping the lists of guidelines again, this time I provide you with a top 5 of things that the TechNet Wiki IS, and how to make it better for everyone.

[Rephrased: what are the most frequent pitfalls you need to avoid as a Wiki Ninja...]

You don't need to figure it out yourself; we've collected the most important items for you.

But after the new year's party, it might be useful to check back on the rules and guidelines that help you to publish the content we need.

1. TechNet Wiki is A WIKI

TechNet Wiki is a WIKI in the first place, meant to stimulate and enable community cooperation.
It's key to write your articles in the third person, in an open format that allows the community to work on them with you.

2. TechNet Wiki is COMMUNITY

Wiki starts with "W" as in "We", not "U"

YOU DO GET the credits, but the content is owned by the community.

Source: (*)

  • As Ana mentioned in this blog post, Wikipedia has this to say about signatures: "When editing a page, main namespace articles should not be signed, because the article is a shared work, based on the contributions of many people, and one editor should not be singled out above others." - Wikipedia: Signatures
  • If you're pasting from your blog, where you use first person, you and the community should change it to third person instead.
  • Remove any unnecessary personal commentary that might have been in the blog post.
  • More guidance in Wiki: User Experience Guidelines

3. TechNet Wiki is about MEANINGFUL CONTENT. 

It's essential to add meaningful, searchable content to the TechNet Wiki... Step-by-step descriptions on the TechNet Wiki are of great value.

Source: (*)

  • Articles must contain an original, valuable collection of information, which means articles with only a hyperlink to third-party information do not comply with the BPOV.
  • This also includes articles with a single link (or embedded link) to a video hosted on another platform.
  • A 1-liner introduction, or a simple introduction paragraph is not sufficient to add value to a video hyperlink. (See more: Wiki Governance: Guideline on Publishing Videos)
  • The same guideline applies to a single link or screenshot of other sources (like a link to the TechNet Gallery) without any additional explanation
  • It also applies to Wiki articles with only screenshots, external links, etc. You must provide an explanation of the screenshots and details of the step-by-step procedure...

4. TechNet Wiki is about respect.

This is backed by international legislation, the terms of use of the community platforms, and the rules & guidelines of TechNet Wiki...

In short: ONLY POST content you OWN.
Be smart, be original! We do support and encourage people to participate and make interesting contributions...  

Source: (*)

  • Someone else has relevant content and you want to help them share it with the world. It's a nice thought, but do not copy other people's content to the Wiki, even if the owner said it was OK.
  • If you are writing an article that takes reference from an already existing article and adds some important points and views that really add value to the topic, such articles are welcome, but you must include a reference to the original article; otherwise it will be seen as plagiarized content.
  • More detailed guidance on piracy is available at: http://aka.ms/wiki_piracy
  • Guidance on providing proper source references and credits is here
    • Wiki: Best practices for source references and quotes

5. TechNet Wiki is about long term value.

Short-term-value messages, single-point-of-view articles, and product marketing articles should not be posted on TechNet Wiki...

Invest your time in valuable content that can withstand the test of time for a while.

We all know that products come and go... usually on a cycle of about 5 years... so there's plenty of time to shine.

Need more info? Check the right-hand side menu on the TNWiki landing page.

And also

Pretty neat list to put in your New Year's resolutions, right?

Welcome to another year of TechNet Wiki... it's going to be very exciting!


Enterprise Mobility Suite

Not just about mobile device security: the IT department of every company must face the challenges of today's modern era. These challenges are becoming ever more complicated and complex, and they demand a fast response and adaptation not only from the IT department itself, but above all from the technologies...(read more)

Sunday - Wiki Life - Development content on the TechNet Wiki


Welcome to yet another Wiki Life.

Today the idea is to present the Development section of the TechNet Wiki.

Featuring references to articles in Portuguese produced by various community members, this collaborative space is an excellent guide for finding information about .NET technology and Visual Studio in general. Among the technologies covered in this section are posts on:

  • C#;
  • ASP.NET;
  • ASP.NET MVC;
  • ASP.NET Web API;
  • Entity Framework.

To access this content, click the link below:

http://social.technet.microsoft.com/wiki/pt-br/contents/articles/9450.desenvolvimento.aspx

It's also important to stress that anyone who wants to contribute development content can edit this section's post to reference their own articles. This creates a centralized location that will make it easier to find information about the technologies mentioned here.

If you are interested in contributing to the TechNet Wiki, visit the link below for more details:

http://blogs.technet.com/b/wikininjasbr/archive/2016/01/06/quarta-feira-wiki-life-como-usar-o-editor-do-technet-wiki.aspx

And that's it for today... See you next time!

Wiki Ninja Renato Groffe (Wiki, Facebook, LinkedIn, MSDN)

PowerTip: Identify PowerShell Version


Summary: Learn how to easily find the version of Windows PowerShell, CLR, and WSMan.

Hey, Scripting Guy! Question How can I use Windows PowerShell to identify the version of Windows PowerShell that is running on my system?

Hey, Scripting Guy! Answer Use the $PSVersionTable automatic variable (you can use tab expansion to avoid some typing). The command and output are shown here:
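
Representative output from a Windows PowerShell 4.0 system is shown below; your values will differ:

PS C:\> $PSVersionTable

Name                           Value
----                           -----
PSVersion                      4.0
WSManStackVersion              3.0
SerializationVersion           1.1.0.1
CLRVersion                     4.0.30319.34209
BuildVersion                   6.3.9600.16406
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0}
PSRemotingProtocolVersion      2.2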


Using the Windows Store for Business with MDT 2013


The Windows Store for Business was made available to everyone back in November, corresponding to the Windows 10 version 1511 feature upgrade that was released at the same time.  For those who aren't familiar with its capabilities, the Windows Store for Business gives organizations the ability to acquire apps for use throughout the organization and, in some scenarios, to distribute those apps.

There are two types of licenses that are available through the Windows Store for Business:

  • Online, tied to an Azure Active Directory account.  This only supports per-user installation of the apps, and licenses are tracked and managed by the Windows Store for Business.
  • Offline, where no Azure Active Directory is needed or used.  This supports per-user installation (regardless of the account type) as well as per-machine provisioning (where the app automatically installs for each user when they log onto the PC), and there is no license tracking.

In the case of MDT, it supports per-machine provisioning of apps, and as of MDT 2013 Update 1 it understands how to provision apps from the Windows Store for Business.  The main difference between store apps and sideloaded apps is the license file provided by the Windows Store for Business, which allows the app to be installed or provisioned on a machine without even needing to enable sideloading.

For those of you who aren’t familiar with MDT’s ability to sideload apps, this has been in MDT since the Windows 8 timeframe, but the documentation is lacking.  To summarize, you need to have the app files, including dependencies, in the needed folder structure.  For example, you could import this folder structure into MDT as a new application, specifying the name of the main .appx file as the command line for the app:

  • MyApp
    • MyApp.appx
    • Dependencies
      • x86
        • MyDependency.appx
      • x64
        • MyDependency.appx

After importing this into MDT (creating an app with source files, specifying the location of the MyApp folder, and specifying a command line of “MyApp.appx”), you could then select that app for provisioning during a task sequence; MDT would automatically create the needed DISM command line to provision the app so that you don’t need to work out that very long command line yourself.
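For reference, the provisioning step that MDT generates is roughly equivalent to the following PowerShell, using the DISM module's Add-AppxProvisionedPackage cmdlet (a minimal sketch; the paths and package names are hypothetical and should be adjusted to your app):

# Sketch of the provisioning command MDT builds for you; adjust paths to your app.
Add-AppxProvisionedPackage -Online `
    -PackagePath 'C:\MyApp\MyApp.appx' `
    -DependencyPackagePath 'C:\MyApp\Dependencies\x64\MyDependency.appx' `
    -SkipLicense    # for a Store for Business app, use -LicensePath 'MyApp_License.xml' instead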

So where does this folder structure come from?  Simple, it’s what Visual Studio creates when you build an app.  So your developers can just provide you with a copy of that output folder and you’re set.  But there’s a little more work needed with the Windows Store for Business:  It will provide you all the files that you need, but you need to download them individually and then place them into the needed folder structure manually, before adding the result to MDT.

Let’s look at a real example.  Once I sign in to the Windows Store for Business at http://www.microsoft.com/business-store, I can manage my inventory of apps and filter it to just the “offline” licensed apps, since these are the ones I could put into my MDT image:

[Image: app inventory filtered to offline-licensed apps]

Let’s assume I want everyone to have Onefootball when they first log into Windows 10.  (It could happen, maybe they work for a European football club.)  When you select that app, you can see the individual files that you need to download, including a license file:

[Image: package, license, and framework files available for download]

So download the package itself (selecting the x64 architecture, so that you get the files for x86 and x64), an unencoded license (XML file), and each of the required frameworks.  Arrange them into a folder structure like I described above:

[Image: folder structure containing the package and its Dependencies subfolders]

with three .appx dependency files in each of the x86 and x64 folders.  Then import that into MDT as a new app with source files:

[Image: importing the folder into MDT as a new application with source files]

specifying the name of the .appx file as the command line (the rest of the name scrolls off the left side for this app):

[Image: application command line showing the .appx file name]

Then when deploying you can select the app:

[Image: selecting the app in the deployment wizard]

Then once I log on as a normal user (not the Administrator) the app shows up on the Start menu:

[Image: the app tile on the Start menu]

and launches just fine:

[Image: the app running]

So you can do that with any offline app available in the Windows Store for Business, just like you can do it with any line of business app.  (MDT will enable sideloading automatically for the LOB app, but that isn’t necessary for the Windows Store app, as the license file means sideloading isn’t needed.)

To simplify things, the Configuration Manager and Intune teams, as well as other management tool vendors, are working on leveraging the Windows Store for Business APIs to make this whole process as easy as checking a few boxes, so stay tuned.

Introduction to Advanced PowerShell Functions


Summary: Guest blogger, Microsoft MVP, Adam Bertram, talks about advanced Windows PowerShell functions.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have a guest post by Microsoft MVP, Adam Bertram. Take it away Adam…

I'm a Pluralsight author and I develop online training courses that are mostly about Windows PowerShell. I recently had the opportunity to develop a course about building advanced PowerShell functions and modules, and a big topic in that course was about advanced functions.

With that topic fresh on my mind, I thought how to build advanced functions would make a great post for the Hey, Scripting Guy! Blog. So without further ado, let's start taking your PowerShell skills to the next level by going from basic functions to advanced functions!

In the PowerShell world, there are two kinds of functions: basic and advanced. If you're just getting started with PowerShell, you're probably using basic functions. This kind of function is what you'd typically think of in a traditional programming language. It's simply a block of code that can be executed autonomously, with optional input parameters and an optional output. In essence, basic functions are a great way to avoid duplicating your code by building a small framework of code that you can simply point to and execute.

For example, a basic function in PowerShell might look something like this:

Function Get-Something {
    Param($item)
    Write-Host "You passed the parameter $item into the function"
}

This is an extremely simple example of a basic function with a single parameter called $item. This function will output a statement to the console, replacing $item with its runtime value. If I call this function by doing something like this:

PS> Get-Something –item 'abc123'

I would get an output that looks like this:

PS> You passed the parameter abc123 into the function

This is the basic premise of a function. It's simply a block of code that can be called. However, there's another kind of function in PowerShell that is called "advanced." Advanced functions inherit all the functionality of basic functions, but they give you so much more.

Closely tied to advanced functions is the concept of cmdlets. When you're learning PowerShell, you might see the word cmdlet generically tossed around to describe any of the various commands that you can execute in your console, for example: Get-Content, Get-Service, or Test-Connection. Sometimes you'll see people downloading PS1 scripts or modules from the Internet, running the functions, and referring to them as cmdlets, which isn't the correct terminology.

Cmdlets are not functions. They are separate entities. Cmdlets are created by software developers in languages other than PowerShell, such as C# or another .NET language. Cmdlets are then compiled into binary form, which allows us non-developers to use them. This is why there's no official Get-Content PS1 script or Test-Connection PS1 script. These cmdlets aren't simply plain-text files. In actuality, if you looked into one, it would probably look something like this:

Aldjf;aliu0paouidjf klj*&(&*^&PDLJF:LKJ:LDKFMKM"J"JKDF

You get the point; plus, I almost accidentally lost my work randomly hitting keys.

Cmdlets consist of compiled, machine-readable code. Functions, on the other hand, are expressed in PowerShell and mere mortals can write and read them in a simple plain-text editor. Why am I talking this much about cmdlets when this is an article about advanced functions? It's because whenever you make an advanced function, it inherits all of the capabilities of those compiled cmdlets.

For example, with any of the compiled cmdlets, you have the ability to accept pipeline input, validate parameters, and use any of the common parameters, for example –Verbose, –ErrorAction, or –WarningVariable. Advanced functions, like cmdlets, have a lot of built-in functionality that you don't get with a simple basic function.

Making a function "advanced" is easier than you can probably imagine. We simply need to add the [CmdletBinding()] attribute just below the function declaration:

Function Get-Something {
    [CmdletBinding()]
    Param($item)
    Write-Host "You passed the parameter $item into the function"
}

Voila! Our basic function has advanced, and it is now all grown up. I think I just shed a tear. They grow up so fast, don't they? Anyway, Get-Something is now an advanced function. So what? What does that give me that I didn't have before? The answer is, “A ton!”

Making your function advanced opens up a whole new world of options. This allows you to build all the functionality of the cmdlets (which you probably used previously) into your own home-grown functions.

For example, maybe I want to output verbose messages to the console for logging purposes only when I use the built-in –Verbose parameter. If I create a basic function without the CmdletBinding() keyword and append –Verbose to Get-Something, it does absolutely nothing. This is because a basic function doesn't have any of the built-in parameters that advanced functions and cmdlets do.

Let's change my previous reference of Write-Host to Write-Verbose and try it now:

Function Get-Something {
    [CmdletBinding()]
    Param($item)
    Write-Verbose "You passed the parameter $item into the function"
}

PS> Get-Something -item 'abc123' -Verbose

PS> VERBOSE: You passed the parameter abc123 into the function

You'll see that the function now understands verbose output. This is also true for Write-Warning and Write-Error. You now have the ability to output different streams to indicate various event severity levels in your function.
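As a quick illustration, here is a minimal sketch that emits all three streams (the function name and messages are just for illustration):

Function Test-Streams {
    [CmdletBinding()]
    Param($item)
    Write-Verbose "Verbose detail about '$item'"    # displayed only when -Verbose is specified
    Write-Warning "Warning about '$item'"           # displayed by default on the warning stream
    Write-Error "Failed to process '$item'"         # writes a non-terminating error record
}

PS> Test-Streams -item 'abc123' -Verbose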

Have you ever wondered how those cmdlets that you've been stringing together actually work? Now you can find out by building functions that accept pipeline input. Perhaps you'd like your function to accept Windows service names directly from the pipeline and bind them to a $Name parameter:

Function Get-Something {
    [CmdletBinding()]
    Param(
        [Parameter(ValueFromPipelineByPropertyName)]
        $Name
    )
    process {
        Write-Host "You passed the parameter $Name into the function"
    }
}

By changing the $item parameter to $Name, decorating it with the Parameter() attribute and its ValueFromPipelineByPropertyName setting, and adding a process block to the function, we can now send objects from Get-Service directly to Get-Something as we'd expect:

PS> Get-Service | Get-Something

PS> You passed the parameter service1 into the function

PS> You passed the parameter service2 into the function

PS> You passed the parameter service3 into the function

PS> You passed the parameter service4 into the function

Have you ever played it safe before running a sensitive command by using the –WhatIf or –Confirm parameters? These parameters allow you to perform a "test run" of an advanced function or cmdlet to see what it would actually do if it ran.

For example, maybe I have a function that removes a bunch of files from my computer. I'd rather not risk removing some sensitive files, so I can now add a little bit of code to my function to account for the –WhatIf parameter:

Function Remove-Something {
    [CmdletBinding(SupportsShouldProcess)]
    Param(
        [Parameter(ValueFromPipelineByPropertyName)]
        $File
    )
    process {
        if ($PSCmdlet.ShouldProcess($File)) {
            Remove-Item -Path $File -Confirm:$false
        }
    }
}

Notice that I had to add SupportsShouldProcess inside the CmdletBinding() parentheses. I also added an If statement to check whether the –WhatIf parameter was used. However, notice that I never declared a –WhatIf parameter; I get one automatically. This isn't possible with a basic function, because –WhatIf functionality is built in to all advanced functions.
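Here is a representative test run (the file path is hypothetical; the message follows PowerShell's standard ShouldProcess output format):

PS> Remove-Something -File 'C:\Temp\OldLog.txt' -WhatIf

What if: Performing the operation "Remove-Something" on target "C:\Temp\OldLog.txt".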

If you got some value from this information and would like to dive deeper into PowerShell, advanced functions, and modules, be sure to check out my Pluralsight course about building advanced PowerShell functions and modules. I take you from an introduction to advanced functions, and I explain in detail nearly every feature that you now have at your disposal.

~Adam

Thank you, Adam. That is a really helpful introduction. Join us tomorrow when Adam will talk about accepting pipeline input into Windows PowerShell functions. It is a really useful technique.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 
