
Azure: ExpressRoute Dedicated Networking, Web Site Backup Restore, Mobile Services .NET support, Hadoop 2.2, and more


This week we released a massive set of enhancements to Windows Azure. Today's new capabilities and announcements include:

  • ExpressRoute: Dedicated, private, high-throughput network connectivity with on-premises
  • Web Sites: Backup and Restore Support
  • Mobile Services: .NET support, Notification Hub Integration, PhoneGap support
  • HDInsight: Hadoop 2.2 support
  • Management: Co-admin limit increased from 10 to 200 users
  • Monitoring: Service Outage Notifications Integrated within Management Portal
  • Virtual Machines: VM Agent and Background Information Support
  • Active Directory: More SaaS apps, more reports, self-service group management
  • BizTalk Services: EDIFACT protocol support, Service Bus Integration, Backup and Restore

You can read more over on Scott Guthrie's blog.


"Build" a Windows Phone App in 30 minutes using only your web browser


It is incredibly easy to build a real Windows Phone application.

So easy, you could do it with your kids, this weekend, in less than an hour. (30 minutes if you aren't doing chocolate milk & cookie breaks while you work).

HUGE CAVEAT: Be realistic. It won't be the next Angry Birds, and it probably won't make you rich. The next Angry Birds is probably not going to happen in an hour, without tools, in your browser, drinking chocolate milk and eating cookies. But your application will look slick, and it will be available in the store to show your mum/dad/wife/etc. very soon after you finish it.

If you have a small business, an application is a great way to promote it.

[By the way, I completely understand that this isn't a Kernel topic at all. I just wanted to share the experience in case anyone else was up for an hour of fun that has nothing to do with our day job.]

First step - Go to this site:

http://appstudio.windowsphone.com/

In truth, that's very close to all you need from this post. You essentially follow the prompts from that point onwards. But let me share the application my five-year-old son and I just built.

The application is just a simple interface to this site. (I couldn't think of anything more creative this high up in the stack).

The site:

Log in, and then choose a template for your application...

I just chose blank. The UI to drag and drop content from the blank template is very easy. But, if you would prefer, there are templates for restaurant menus, fitness plans, goals, all sorts of stuff.

Once you choose a template, you can select where to pull data from; in my case, this blog website via RSS. If you want, you can add static data instead and simply republish the application when you want to change it.

Then you start to navigate through the bits and pieces about colors, fonts etc.

(Yes, I know this blog is really, boringly simple... I just wanted to share how easy it actually is to get something in the store that you can have your friends download and use.)

Once you have colors, content, and layout done, all you need to do is choose some publishing information and decide whether you want ads in your application (to make some cash), and boom, you are just about done.

Choose finish and you will be prompted for the type of application you want to generate.

(I chose Windows Phone 8 for simplicity)

Right at the end, after your application has finished generating, it gets slightly trickier if you haven't published an application before. It's not a big deal; you just need to know how to take the application package and move it to the Windows store.

It's not difficult, just something you mess about with once, and then it's pretty simple from that point onwards.

(If you want to, you can install the application on your phone or friends' phones by providing a certificate, rather than using the store at all. But to get it in the store, you have to download the publish package and submit it to the store as an application.)

To get the application into the store, download the "publish package":

Once you have the package, go to the phone developer site.

http://developer.windowsphone.com/en-us

There is a link that says "submit an app"

You will see a heap of options, but you can scroll past most of them; at the bottom you will be prompted to "agree to continue" (of course, read all the terms, etc. first).

If this is your first time developing something for the Windows Store you might be prompted to set up some account details.

Once you are done, you will arrive at this screen:

Once again, click "submit app" and then start to fill out the details of the application.

Eventually, you will be prompted to provide the package you downloaded - the "publish package".

Mine looks like this:

You will choose the markets it should go to (Australia, US, Canada, etc.), and then the category your application belongs to. There are a few other details, but it is all fairly intuitive stuff.

Once you answer all the questions, you are done:

Within a few hours, you should have an application to try out.

I will upload mine on this blog once it hits the store.

Angry Birds riches here I come!

Thanks,

Chad

Scaling out RDS in Windows Azure


Freek Berson, a Remote Desktop Services (RDS) MVP, developed this concept model for driving an RDS solution in Azure. The question we asked when discussing this topic with Freek was: What if you wanted all the great enhancements to Remote Desktop Services in Windows Server 2012 R2 to securely enable remote users, but could scale up or down to meet changing demand, without having to pay for the excess capacity unless you were using it? Freek has been working on a proof of concept (POC) with one of his customers to accomplish exactly that.

Why Am I Doing This?

The customer needed a solution that could easily scale up the number of RD Session Host servers during a specific period of the month, and scale down during the rest of the month to save resources they weren't using. They also needed this solution to be simple: it shouldn't matter to the operation of the overall deployment whether it was running at full capacity or not. Adding capacity to an RD Session Host server farm is conceptually simple: if you add more RD Session Host servers to the farm, the RD Connection Broker will distribute incoming connections based on server load. If the farm is homogenous (the model assumed in RDS), it really doesn't matter how big it is, so long as sufficient capacity exists to meet user demand. The following diagram shows the basic layout of this solution:


Figure 1. Basic infrastructure diagram

There’s a hitch, though: if you scale up and down in the private cloud, you still have to support and host the infrastructure. What if we could use Windows Azure to host the spare RD Session Host server capacity?

Running RDS in Windows Azure might seem like an expensive solution, since his testing showed an average maximum of 36 concurrent users running a medium workload on an Extra Large Windows Azure VM. However, one of the advantages of Windows Azure is that you pay for your resources per minute. You don't pay for VMs that are switched off and deallocated, so hosting spare capacity not only makes the deployment more flexible but could potentially lower costs.

Can you even Host RDS in Azure?

RDS on Azure is a relatively new option. If you're familiar with his personal blog, you'll know that he has been doing a lot of testing on this subject since the early days of Windows Azure. Here are some of the key developments:

  • In September 2012, he set up a first lab environment running a very basic RDS deployment in Windows Azure. "At that time, Microsoft did not support running any RDS role in Windows Azure yet; I just wanted to see if it was technically doable," Freek said.
  • On July 1st, 2013, Microsoft updated the Product Use Rights to allow running session-based RDS roles on Windows Azure. However, licensing back then was only possible by using Subscriber Access Licenses (SALs) purchased through the Microsoft Service Provider Licensing Agreement (SPLA). To get insight into what kind of workloads the various types of Azure virtual machines would be able to handle, Freek performed a first performance test running Login VSI medium workloads.
  • In September 2013, he performed additional testing on publishing highly available, load-balanced environments in Windows Azure and adding additional Management Suites.
  • Recently, Microsoft announced that from 2014 you can use RDS CALs (Client Access Licenses) instead of SALs to connect to Azure-hosted RD Session Host servers.

Making this work

“For the POC I only deployed RD Session Host servers in Windows Azure, because my goal was to create a flexible way to scale RD Session Host server capacity with demand. For simplicity, all the other roles, such as the RD Connection Broker, RD Web Access (to present a common portal), RD Licensing, etc., are still hosted on premises. However, depending on the exact customer demands, the location of these roles technically should not matter, as long as all servers running these roles can communicate with each other through the Windows Azure VPN,” Freek commented. Using the VPN means that he didn't need an instance of RD Gateway for the servers running in Azure, although technically you still could if needed.

“One of my key goals was flexible scale, and there is no tool in the UI to automatically scale RD Session Host servers up and down in a deployment. However, since RDS in Windows Server 2012 and later is fully manageable using PowerShell, I could write a script to detect load, drain users from underutilized servers, and then (once no one was logged in) power down and deallocate the servers so I wouldn't get charged for them. Because this customer's demand increases at a predictable time, I could run a second script slightly prior to that time to provision more RD Session Host servers and add them to my deployment,” he said.

The PowerShell script takes a few parameters (configurable in a config.ini file); a sketch of the scale-down logic follows the parameter list below:

- The name of the Session Collection

- The name of the Active* RD Connection Broker

- The number of hosts you would like to be available during out-of-office hours

 

  • Since Windows Server 2012, the RD Connection Broker can be made highly available in an active-active configuration. In this case, Active means the RD Connection Broker that handles configuration changes (there can be only one per deployment).
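The post doesn't include Freek's script itself, but the scale-down logic he describes might look roughly like the following sketch. All names and values are placeholders, and Stop-AzureVM (from the Azure service management module of that era) is my assumption for the shutdown-and-deallocate step:

# Sketch only: assumed names and parameter values throughout
$collection = "FullDesktop"          # Session Collection name (from config.ini)
$broker     = "rdcb01.contoso.com"   # active RD Connection Broker (from config.ini)
$minHosts   = 2                      # hosts to keep during out-of-office hours (from config.ini)

$sessionHosts = Get-RDSessionHost -CollectionName $collection -ConnectionBroker $broker
foreach ($sh in ($sessionHosts | Select-Object -Skip $minHosts)) {
    # Put the host in drain mode so existing users can finish their work
    Set-RDSessionHost -SessionHost $sh.SessionHost -NewConnectionAllowed NotUntilReboot `
        -ConnectionBroker $broker

    # Once no one is logged on, shut down and deallocate the VM so it stops billing
    $active = Get-RDUserSession -CollectionName $collection -ConnectionBroker $broker |
        Where-Object { $_.HostServer -eq $sh.SessionHost }
    if (-not $active) {
        Stop-AzureVM -ServiceName "rdsfarm" -Name $sh.SessionHost.Split('.')[0] -Force
    }
}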


Figure 2. Sample output of the script

A second scheduled task, set to run just before we expect to need full capacity again, runs a PowerShell script that checks which servers are offline and powers them up. Because the servers were previously set to "drain mode (until reboot)," they will automatically start accepting new sessions again.
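Again, that second script isn't published in the post, but a minimal sketch could look like this (the cloud service and VM names are placeholders; Get-AzureVM and Start-AzureVM are the service management cmdlets of that era):

# Sketch only: power the deallocated session hosts back up before peak demand
$service = "rdsfarm"   # assumed cloud service name
Get-AzureVM -ServiceName $service |
    Where-Object { $_.Name -like "rdsh*" -and $_.Status -eq "StoppedDeallocated" } |
    ForEach-Object { Start-AzureVM -ServiceName $service -Name $_.Name }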

Closing thoughts

Freek said he had three goals for this POC:

1. Automatically scale the size of the farm to meet changing demand

2. Host the spare capacity in Windows Azure so my customer would only pay for the servers when they were in use

3. Securely connect the excess capacity to the main deployment hosted on the customer site, so the user connection experience didn’t change as the farm changed.

This design meets those requirements. However, some questions remain. He will continue to test application performance to make sure it remains consistent; while every application that runs on an on-premises RD Session Host should run as expected when hosted in Azure, he wants to be sure that the end-user experience doesn't change. "Licensing could also be an issue depending on the applications they use. For example, while I can use RDS CALs for the Azure-based RD Session Host servers, it's not possible to properly license Microsoft Office running on RDS in Windows Azure without having SPLA. I'm sure that will change in the future, but I'm not sure when," Freek stated in closing.

Running RD Session Host servers in a hosted or hybrid environment won't make sense for everyone; it will depend on the specific use case. And hosting Remote Desktop Services on Windows Azure is still at an early stage. However, given Microsoft's emphasis on hosted workloads, he is certain it's just a matter of time until Remote Desktop Services fully extends to Windows Azure in one form or another.


Enjoy!

New on the TechNet Wiki: DirectAccess Survival Guide


DirectAccess was first introduced in Windows Server 2008 R2, and is greatly improved for Windows Server 2012 and Windows Server 2012 R2.

If you're interested in deploying DirectAccess, regardless of which of these operating systems you use, the new DirectAccess Survival Guide puts all of the Technical Library links to DA resources at your fingertips. Not only that, but the fact that this guide is on the Wiki means that you can add all of your own favorite links to DA information.

Pearson’s Windows 8 apps help personalize digital learning for students


In collaboration with Microsoft, Pearson – a provider of educational materials and services – takes students one step closer to achieving personalized digital learning by introducing several Windows 8 apps for schools to use during the 2014–2015 school year.

As the Microsoft in Education blog reports, those apps are: Pearson’s Common Core System of Courses, TestNav, and reading apps iLit and eText.

The Common Core System of Courses is the first curriculum built for a digital personalized learning environment that is aligned to the new standards for college and career readiness. Pearson's core reading program, iLit, offers students personalized learning support with built-in reward systems. TestNav 8 allows schools to administer tests online and on demand, and eText gives students interactive and intuitive features such as navigation controls, enhanced search functions, personal highlighting, bookmarks, and note-taking.

Head over to the Microsoft in Education blog to read more about Microsoft’s collaboration with Pearson and to see how these apps have already succeeded in classrooms.


Athima Chansanchai
Microsoft News Center Staff

5 ways Windows saved a new mom’s sanity


With all of the craziness that goes along with balancing work, friends and taking care of a little one, moms need all the help they can get. Anna Brinkmann has five tips for how Windows can be your sanity saver. Here are a couple:

Personal Assistant: Use the Outlook calendar for any and all reminders (buy bananas!), because they find you wherever you are. Set notifications to remind you hours or days ahead of time.

Keep in Touch: Connect your Outlook account with your Facebook account so your phone will remind you about your friends’ birthdays. Send well wishes on time and amaze them with your multitasking capabilities.

Check out the Windows Experience Blog for the rest, and try Brinkmann’s tips this weekend!

You might also be interested in:

· Three Surfaces in the house help a family run a business
· Get more out of the new OneDrive app on Xbox One
· Staff App Pick: Urbanspoon for Windows and Windows Phone

Aimee Riordan
Microsoft News Center Staff

IRM (Information Rights Management) features and limitations using Office Web Apps On-Premise


Summary on IRM with Office Web Apps 2013

When it comes to IRM-protected Office document libraries in SharePoint 2013, Office Web Apps 2013 offers read-only capability. Office Web Apps relies on the document host system (i.e., SharePoint) to handle communication with Rights Management servers, since it has no means to communicate with the Rights Management server directly.

IRM protection precludes Office Web Apps from allowing editing of IRM-protected documents. Documents that are IRM-protected through the client application (i.e., Word, PowerPoint, Excel) and stored on a document management system cannot be opened, nor can IRM-protected documents stored on Windows Live, Facebook, or other third-party host systems.

Limitations

Office Web Apps does not support the following features, which are normally offered for non-IRM-protected documents. These features are currently suppressed from the user interface:

  • Edit in browser
  • Print
  • Save
  • Copy selection
  • Add comments

There is no word yet on when, or whether, these features will become available for IRM-protected libraries in an upcoming release.

A novel method in IE11 for dealing with fraudulent digital certificates


Digital certificates are a key mechanism for establishing identity on the Internet. Trust in these certificates is a result of trusting the issuing entity - the Certification Authority (CA). Unfortunately, as a result of a number of CA related incidents over the past few years, that trust has been somewhat undermined. A number of approaches to address this lessened trust have surfaced in academia and industry, including Public Key Pinning, network notary based solutions such as Perspectives and Convergence, and making the list of issued certificates public by either requiring CAs to operate a simple web service, or supporting  more complex protocols like Certificate Transparency (CT).

Problems with Today's Certificate Trust Model

Today, browsers base trust decisions on the inclusion of roots of trust in a root store. Inclusion in that root store is usually based on factors such as WebTrust for CA or ETSI TS 102 042 audits and adherence to industry guidelines published by the CA Browser Forum for SSL certificates. Each browser vendor may specify additional technical requirements.

Microsoft requires each root CA to provide evidence of a successful audit from a qualified auditor annually. In addition, the root CA is also required to sign a contractual agreement to follow technical requirements such as the use of strong cryptographic algorithms.

In all browsers, trusted roots are effectively treated equally and, for the most part, can issue certificates for any domain name. If one CA is compromised (e.g., DigiNotar) or fails to follow its established operating procedures (TurkTrust, ANSSI), the result is often wrongly issued or fraudulent certificates that may be used in Man-In-The-Middle (MITM) attacks to spoof the identity of web sites. CAs are not infallible, and when problems do arise, the CA faces a very difficult task in detecting all fraudulent or wrongly issued certificates quickly, if at all.

Detecting fraudulent certificates (or any fraudulent cryptographic statements in general) used in MITM attacks is difficult because the attacker often erases any evidence of issuance from the compromised CA. Detecting attacks from the victim's point of view is also difficult because victims do not have reference data from the perspective of users who are not under attack.

As the cost of computing power decreases, the likelihood of attacks against weak cryptographic algorithms has significantly increased. In May 2012, a complex piece of targeted malware known as "Flame" was identified that essentially spoofed the Windows Update channel by exploiting a Microsoft-operated CA that was still using MD5, convincing victims to download its binaries as a security update from Windows Update. This incident taught us that simply requiring all CAs to stop using weak cryptographic algorithms is not sufficient. We must also monitor the ecosystem closely for compliance and drive it to switch to stronger algorithms by announcing timelines for blocking weak crypto algorithms in Microsoft products far in advance.

Microsoft’s Vision for Improving the Trustworthiness of Certificates

Microsoft believes the best way to improve the security of certificates is to have the capability to detect fraudulent or wrongly issued certificates in the wild quickly.

Like the SmartScreen Filter built into Internet Explorer that is designed to warn users when they attempt to visit a phishing site, we believe monitoring the internet for fraudulent or wrongly issued certificates should be an integral part of the browsing experience. We also believe that any viable solution to improve the security of certificates cannot add more complexity or place more burden on web site operators and end users.

In Internet Explorer 11, we have extended the telemetry collected by the SmartScreen Filter to include the SSL certificates presented by web sites. We are building tools to analyze this information on our servers to build intelligence about certificates issued by every root CA trusted by IE as seen by our users around the world. Our initial goal is to flag potential MITM attacks using publicly trusted certificates that affect thousands of IE11 users. Over time, we will enhance the feature to detect attacks against a smaller number of IE users.

The following are examples of some of the scenarios where we can detect fraudulent or wrongly issued certificates using this data, in addition to detecting CAs that do not meet the technical requirements defined either in the Microsoft Root CA Program or in the CA Browser Forum guidelines.

1. A website is using a certificate that is capable of being used as a subordinate CA. This would indicate the certificate was wrongly issued.
2. A website suddenly presents a different certificate, issued by a different CA, only to a certain region. This might indicate a possible MITM attack in a specific country or region.
3. There was a sudden and significant change in the fields a CA includes in certificates it issues, for example, omission of or a change in the OCSP responder location. This would indicate that a CA was either compromised or has not followed standard operating procedures.

 

When potentially fraudulent or wrongly issued certificates have been identified, we will work with the CA to identify the cause. Depending on the severity and scale of the problem, the CA could revoke the certificate using standards-based certificate revocation mechanisms. In addition, Microsoft may also use the Disallowed Certificate Trust List mechanism to revoke certificates that affect the security of a broad set of Microsoft customers.

Note that the detection of homoglyph attacks (where a human is fooled by visual similarity, such as rnyspace.com versus myspace.com) and of fraudulent certificates issued as a result of insider attacks are out of scope.

Transparency vs Privacy

Many customers consider internal DNS records to be sensitive information that they do not want to make public. At the same time, they may prefer to purchase certificates from public CAs for servers on their internal network, where the server name is under a subdomain of a public domain name and is not published in public DNS records. With more businesses permitting employees to bring their own devices (BYOD) and use them on internal networks, we believe customers should have the option to purchase certificates from a public CA for internal servers without disclosing internal network information to the public.

We also believe domain registrants should have the option to monitor all certificates issued by all public CAs that contain their domain names, once the registrant proves domain registration. Such a service could be similar to the Smart Network Data Service (SNDS) operated by Outlook.com, which allows owners of IP address space to help fight spam. In addition, domain registrants could be notified by email when new certificates with their domain names appear in our database. The domain registrant would have the option to report suspicious certificates to us and notify the CA to revoke them.

Privacy is a core component of trustworthy computing. Microsoft is committed to helping ensure users’ privacy while providing protection from unsafe websites. Telemetry submitted to the SmartScreen web service for evaluation is transmitted in encrypted format over HTTPS. The data is not stored with a user's IP address or other personally identifiable information. Because user privacy is important in all Microsoft's products and technologies, Microsoft has taken steps to help ensure that no personally identifiable information is retained or used for purposes other than improving online safety; data will not be used to identify, contact, or provide advertising to users. You can read more in our privacy statement.

Conclusion

In conclusion, with IE11 you can feel safer when browsing to your email or banking website. We do this in a manner that is seamless from both the user's and the trusted CA's perspective, by collecting telemetry as part of user browsing activity and performing analysis on our backend servers. New certificate-related activity for a domain name can be automatically reported to domain registrants, who can decide whether a certificate needs to be revoked. In summary, Microsoft is working hard to protect you from fraudulent or wrongly issued certificates with a solution that does not require changes to existing web site operations or the IE user experience.


Acknowledgement

Many thanks to Kelvin Yiu and Anthony Penta for co-authoring this blog post. Also, thanks to Nelly Porter, Kevin Kane, Glenn Pittaway and Magnus Nystrom for their review of this blog post. 

 

 


Top Support Solutions for Microsoft Exchange Server 2010


These are the top Microsoft Support solutions to the most common issues experienced using Microsoft Exchange Server 2010 (updated quarterly).

1. Solutions related to Database Availability Group (DAG):

2. Solutions related to database will not mount or is dismounted:

3. Solutions related to mobility and ActiveSync:

4. Solutions related to public folders:

5. Solutions related to client/server connection issues and delays:

Microsoft Cloud Trust Center Resources



Tim Tetrick

I know many of you are familiar with the Office 365 Trust Center and use it on a regular basis to help address your customer’s questions and concerns around Privacy, Transparency, Security, Compliance, and Certifications.

What you may not be aware of is that Microsoft has similar Trust Centers for Windows Azure, Dynamics CRM Online, and just recently, Microsoft announced the launch of the official Windows Intune Trust Center.

Similar to the other Trust Centers, the Windows Intune Trust Center is a single source customers can go to learn more about our commitment to security, privacy, and compliance policies with Windows Intune, as well as the major certifications we have attained.  The web page, which will continue to be updated, includes information on how we conduct our security testing, provides pointers to other technical resources and whitepapers on TechNet, defines how we protect data and privacy, and answers frequently asked questions.

These resource centers should enable you to quickly and confidently address customer issues around Privacy, Transparency, Security, Compliance, and Certifications across the entire Microsoft Cloud family.

Office 365 Trust Center

Windows Azure Trust Center

Dynamics CRM Online Trust Center

Windows Intune Trust Center

Enjoy!

Top Support Solutions for Microsoft Exchange Server 2013


These are the top Microsoft Support solutions to the most common issues experienced using Microsoft Exchange Server 2013 (updated quarterly).

1. Solutions related to Microsoft Outlook authentication prompt:

2. Solutions related to Outlook user connectivity issues:

3. Solutions related to Outlook Anywhere (RPC over HTTPS):

4. Solutions related to inbound mail from the Internet:

5. Solutions related to Exchange Setup failures or error messages:

Friday - International Community Update - TechNet Wiki Day Winners



Many people don't know this, but although the TechNet Wiki Day is an award given mainly to Portuguese-language articles, it has also been awarded to articles in other languages, with members from several countries.

Promoted by the Brazilian Community, this is the oldest TechNet Wiki award still in effect.

It is one of the great highlights of the international community and was a source of inspiration for the creation of other awards in several communities, such as the French and the Turkish ones.

Even the biggest TechNet Wiki award, the TechNet Guru, was modeled on some of the criteria of our award.

To add even more value to the award, we present some winning TechNet Wiki Day articles created in other languages:

Articles in other languages are awarded under special conditions, when an article stands out for our community during the month.

This special award does not replace the award for articles produced in Portuguese.

Articles in other languages publicize our award and make it even more valuable. So take part, and get the chance to write your name among the best.

Create an article on TechNet Wiki Brazil!

See you,

Wiki Ninja Durval Ramos (Twitter, Profile)

Students Involved with Technology 2014


On January 24th, Chuck D was on the Arsenio Hall Show. Arsenio brought up his old show: "We talk about my old show... I fought for groups like you and NWA, and in one situation I lost a fight with Paramount because your song was called 'By the Time I Get To Arizona'."

"Tell young people what that song was about..."

Chuck D responded, "Everybody got a smartphone, so you can Google that... You can YouTube that... That's what makes this era different from any other era before. If we are educated enough to stay on top of technology, it can be useful for us, and there's no excuse for somebody to say 'I don't know.' You can research (with) whatever's in your pocket."

On Saturday, February 22nd, students from all across Illinois will present technical topics to their peers at several schools. This is the 14th annual conference and the 6th in DeKalb. It's fun to see so many talented young students show off their passion for technology.

The conference web site is http://sitconference.org - more about this year's event soon...

2014 Winter PowerShell Scripting Games Wrap Up #6


Summary: Microsoft Scripting Guy, Ed Wilson, wraps up the 2014 Winter Scripting Games and talks about parameter validation.

Weekend Scripter: Parameter Validation

Microsoft Scripting Guy, Ed Wilson, is here. Today, I'm playing with my new Surface Pro 2 device. It came with a year of free Skype and two years of 200 GB storage on OneDrive. WooHoo! It is great. I love that the Surface Pro 2 is so fast and so robust. I was really surprised and happy when the Scripting Wife got it for me. Cool.


     Note  This is the final post in a series in which I talk about things I noticed whilst grading submissions for the
     2014 Winter Scripting Games. In case you missed the previous episodes:

In today’s post, I’ll discuss adding robustness to Windows PowerShell scripts by using parameter validation.

What are parameter validation attributes?

Glenn Sizemore wrote a great introduction to Windows PowerShell parameter validation attributes in his blog post, Simplify Your PowerShell Script with Parameter Validation. It is a good overview. I also talked about using a ValidatePattern parameter attribute in Validate PowerShell Parameters before Running the Script. That post provides a good example of using a regular expression pattern to validate parameter input.

The whole point of using parameter validation attributes is that they do the following:

  • Simplifies Windows PowerShell scripting by using built-in capabilities
  • Makes your Windows PowerShell script easier to read and easier to understand
  • Makes the script more robust by checking permitted input values
  • Simplifies script error handling

About parameter validation attributes

There are a number of parameter validation attributes. They are documented in about_Functions_Advanced_Parameters in the TechNet Library. The following table lists the parameter validation attributes.

Validation Attribute       Meaning

AllowNull                  Permits a null value for a mandatory parameter

AllowEmptyString           Permits an empty string ("") as a value for a mandatory parameter

AllowEmptyCollection       Permits an empty collection (@()) as the value of a mandatory parameter

ValidateCount              Specifies the minimum and maximum number of values that a parameter accepts

ValidateLength             Specifies the minimum and maximum number of characters in a value supplied to a parameter

ValidatePattern            Specifies a regular expression pattern that a value must match

ValidateRange              Specifies a range of permitted values for a parameter

ValidateScript             Specifies a script block that validates the value; an error is raised if it evaluates to $false

ValidateSet                Specifies that the value for a parameter must be a member of the defined set

ValidateNotNull            Generates an error if the value is null

ValidateNotNullOrEmpty     Generates an error if the value supplied is null, an empty string, or an empty array

 

A practical example

In the following function, the input must be a member of a particular set. The set is defined in a variable called $myset and an if/else construction checks input to see if it is a member of that set. If it is, the input is valid. If it is not, an error message appears, and the permissible values are displayed. This is a useful script, and it is a nice check on the input values.

Function My-Parameters
{
 Param ($myinput)
 $myset = "red","blue","green","orange"
 if($myset -contains $myinput)
    {Write-Output "The input is valid"}
 else
    { throw "The input is not valid. It must be a member of $myset" }
}

After I load the function, I call the function and pass a couple of values. The values are members of the $myset set, and therefore the input is valid. When I pass the value almond, however, an error is thrown. This is shown in the image that follows:

Image of command output

I can rewrite the previous script to use parameter validation. To do this, I use the [ValidateSet()] attribute:

Function Use-ParameterSetValidation
{
  Param(
   [ValidateSet("red","blue","green","orange")]
   [string]$myInput)
   Write-Output "The input is valid"
}

Not only is the script a lot cleaner, but it also requires much less typing. In addition, the script is easier to read because I do not need to weave through the if/else statement and the throw statement. But more than that, if I run the function in the Windows PowerShell ISE, I also gain IntelliSense. This makes it very difficult to choose the wrong value; in fact, I would have to deliberately ignore IntelliSense to enter a false value. The following image illustrates the free IntelliSense I gain when I implement the ValidateSet parameter attribute.

Image of command output

Even though I did not wire up a throw statement, if I ignore IntelliSense and supply the wrong value for the parameter, an error occurs. In addition, as in my previous example where I displayed the contents of the $myset variable to show the permissible values, Windows PowerShell supplies the permissible values for the set. This is shown in the following image:

Image of command output
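The same pattern extends to the other attributes in the table. For instance, here is a small function of my own (the name and parameters are mine, not from the Scripting Games entries) that combines ValidateRange and ValidatePattern:

Function Test-ValidationAttributes
{
  Param(
   # Only ports outside the well-known range are accepted
   [ValidateRange(1024,65535)]
   [int]$port,

   # Computer names may contain only letters, digits, and hyphens
   [ValidatePattern('^[a-zA-Z0-9-]+$')]
   [string]$computerName)

   Write-Output "Testing $computerName on port $port"
}

Calling Test-ValidationAttributes -port 80 -computerName SRV01 fails the range check, and a computer name such as SRV_01 fails the pattern check; in both cases Windows PowerShell generates the error for me.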

Join me tomorrow when I begin a series about using Windows PowerShell with Microsoft Surface RT, Microsoft Surface Pro, and Microsoft Surface Pro 2. It will be some fun stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

TechNet Wiki Weekly Most Active Contributors / Top 10


Hello, dear Wiki family and technology enthusiasts;

In yet another Saturday Top 10 ranking, we see the rise of the Turkish Wiki family. It is gratifying that three of our team members are among the most active contributors, and it is a particular pleasure to see Elguc rise to first place; we congratulate him. We also congratulate our friends Uğur Demir and Recep Yüksel for making the list. Looking over the list, you can see the Turkish Wiki family rising in every area. This recently founded community has been climbing especially in article count. I also congratulate the other valued contributing members of the Wiki family.


1) Elguc Yusufbeyli - TAT

Profile

New Articles: 56

Article Edits: 416

Comments: 102

Expertise: Active Directory

2) Richard Muller

Profile

New Articles: 358

Article Edits: 7378

Comments: 4911

Expertise: Active Directory

3) Ed Price - MSFT

Profile

New Articles: 775

Article Edits: 23296

Comments: 8380

Expertise: SQL Server & Office

4) Peter Geleen - MSFT

Profile

New Articles: 72

Article Edits: 8464

Comments: 412

Expertise: Identity & Access Management

5) Benoit Jester - MTFC

Profile

New Articles: 159

Article Edits: 3619

Comments: 1175

Expertise: SharePoint

6) Carsten Siemens

Profile

New Articles: 67

Article Edits: 3573

Comments: 3805

Expertise: Senior Technical Architect, Translator

7) Sandra Pereira

Profile

New Articles: 51

Article Edits: 2403

Comments: 512

Expertise: .NET, BizTalk, and SOAP/XML/XSLT

8) Maheshkumar S Tiwari

Profile

New Articles: 45

Article Edits: 4697

Comments: 3773

Expertise: EAI/EDI technologies and BizTalk

9) Uğur Demir - TAT

Profile

New Articles: 135

Article Edits: 353

Comments: 215

Expertise: Exchange Server

10) Recep Yüksel - TAT

Profile

New Articles: 19

Article Edits: 127

Comments: 36

Expertise: Active Directory, Exchange Server, System Center

 



Hyper-V 2012 R2 Network Architectures Series (Part 1 of 7) – Introduction


Hi Virtualization gurus,

For six months now, I've been working on internal readiness around Hyper-V networking in 2012 R2: all the options and functionality that exist and how to make them work together. I've realized that a common question in our team, and from our customers, is what the best practices or best approaches are when defining the Hyper-V network architecture of your private cloud or virtualization farm. Hence I decided to write this series of posts, which I think might be helpful, at least as a brainstorm to find the best approach for every particular scenario. The reality is that each environment is different and uses different hardware, but at least I can help you identify five common scenarios for how to squeeze the performance out of your hardware.

I want to make clear that there is no one right answer or configuration, but your hardware can help you determine the best configuration for a robust, reliable, and high-performing Hyper-V network architecture. Also, I want to note that I will make some personal recommendations based on my experience. These recommendations might or might not be the official, generic recommendations from Microsoft, so please contact your support contact in case of any doubt.

The series will contain these posts:

1. Hyper-V 2012 R2 Network Architectures Series (Part 1 of 7) – Introduction (This Post)

2. Hyper-V 2012 R2 Network Architectures Series (Part 2 of 7) – Non-Converged Networks, the classical but robust approach

3. Hyper-V 2012 R2 Network Architectures Series (Part 3 of 7) – Converged Networks Managed by SCVMM and PowerShell

4. Hyper-V 2012 R2 Network Architectures Series (Part 4 of 7) – Converged Networks using Static Backend QoS

5. Hyper-V 2012 R2 Network Architectures Series (Part 5 of 7) – Converged Networks using Dynamic QoS

6. Hyper-V 2012 R2 Network Architectures Series (Part 6 of 7) – Converged Networks using CNAs

7. Hyper-V 2012 R2 Network Architectures Series (Part 7 of 7) – Conclusions and Summary

Hyper-V 2012 R2 Network Architectures Series (Part 2 of 7) – Non-Converged Networks, the classical but robust approach


As an IT guy, I have a strong belief that engineers understand graphics and charts much better than bullet points and text, so the first thing I will do is paste the following diagram.

Figure: Non-converged network architecture

At first sight you can recognize, from left to right, that there are six physical network cards used in this example. You can also see that the two adapters on the left are 1 GbE adapters and the other four green adapters are 10 GbE adapters. These basic considerations are really important because they will dictate how your Hyper-V cluster nodes perform.

On top of the six physical network cards you can see that some of them are using RSS and some are using dVMQ. Here is where things start to become interesting, because you might wonder why I didn't suggest making one big four-NIC team out of the 10 GbE adapters and dismissing or disabling the 1 GbE adapters. At the end of the day, 40 Gb of bandwidth should be more than enough, right?

Well, as a PFE, I like stability, high availability, and robustness in Hyper-V environments, but I also like to separate things that have different purposes. Using the approach from the picture above gives me the following benefits:

  • You can use RSS for the Mgmt, CSV, and LM traffic. This enables the host to squeeze the 10 GbE adapters if needed. Remember that RSS and dVMQ are mutually exclusive, so if I want RSS I need separate physical NICs.
  • Since 2012 R2, LM and CSV can benefit from SMB Multichannel, so I don't need to create a team, especially when the adapters support RSS. CSV and LM will each be able to use 10 Gb without external dependencies or aggregation (such as LACP) on the physical switch.
  • The CSV and LM cluster networks, in conjunction with the Mgmt network, will provide enough resilience for the cluster.
  • The Mgmt network will have HA using an LACP team. This is important, and it is possible because each physical NIC is connected directly to a physical switch that can be aggregated by our network administrator.
  • Any file copy using SMB between Hyper-V hosts will use the CSV and LM network cards at 10 Gb because of how the SMB Multichannel algorithm works. Faster adapters take precedence, so even for a copy over the Mgmt network I will benefit from this awesome feature, and the copy will be sent at 20 Gb (10 Gb from each of the CSV and LM adapters).
  • SCVMM will always have a dedicated Mgmt network to communicate with the Hyper-V host for any required operation, so creating or deleting a logical switch will never interrupt the communication between them.
  • You can dedicate two entire 10 GbE physical adapters to your virtual machines by using an LACP team and creating the vSwitch on top (a sketch of this follows the list). dVMQ and vRSS will help the VMs perform as needed, while the LACP/Dynamic team will allow them to receive and send up to 20 Gb if really required. I have to be honest here: the maximum bandwidth I have seen inside a VM using this configuration was 12 Gb, but that is not a bad number at all.
  • You can use SCVMM 2012 R2 to create the logical switch on top and apply any desired QoS to the VMs if needed.
  • You are not mixing storage I/O with network I/O.
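As a rough sketch of the VM-facing piece of this design (the adapter, team, and switch names below are mine, not prescriptive), the LACP/Dynamic team and the virtual switch could be created like this:

# Team the two 10 GbE adapters reserved for VM traffic using LACP + Dynamic
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC5","NIC6" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

# Create the external vSwitch on top of the team; the host keeps its own Mgmt NICs
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false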

So, as you can see, this setup follows a lot of best practices and has real benefits. It is not bad at all, and I have probably forgotten some other benefit... but what are the constraints or limitations of this non-converged network architecture? Here are some of them:

  • Cost. Not a minor issue for customers that can't afford four 10 GbE adapters and all the network infrastructure this might require if we want real HA in the electronics.
  • Additional management effort. This model requires us to set up and maintain six NICs and their configurations. It also requires the network administrator to maintain the LACP port groups on the physical switch.
  • More cables in the datacenter.
  • Replica and other management traffic that is not SMB will only have up to 2 Gb of throughput.
  • Enterprise hardware is going in the opposite direction. Today it is more common to see third-party solutions that multiplex the real adapters into more logical partitions, but let's talk about that later.

Well, maybe I didn't give you any new information regarding this configuration, but at least we can see that this architecture is still a good choice, if possible, for several reasons. It's up to you and the hardware you have whether to use this option.

See you in my next post, where I will talk about Converged Networks Managed by SCVMM and PowerShell.

Hyper-V 2012 R2 Network Architectures Series (Part 3 of 7) – Converged Networks Managed by SCVMM and PowerShell


Following the same rule as the previous post about non-converged network architectures, let me add the diagram first and elaborate my thoughts from there.

Figure: Converged network architecture managed by SCVMM and PowerShell

As you can see, in this diagram there are only two 10 GbE physical network cards. This hardware configuration is becoming more and more popular because 20 Gb of bandwidth should cover most customers' needs, simplifying the datacenter and reducing costs. However, this architecture has some caveats that are worth mentioning to avoid disappointment with the performance it can offer.

So let's start with the good things about this option and leave the constraints to the end of this post.

  • Simplified and central management from SCVMM 2012 R2. We can set up almost everything from Virtual Machine Manager 2012 R2 and its powerful logical switch options. Hyper-V host traffic partitioning, VM QoS, and NIC teaming will be configured automatically by VMM once our logical switch template is applied to the Hyper-V host.
  • QoS based on weight can be applied by VMM 2012 R2, enhancing resource utilization based on need and priority. For example, we might want to guarantee a minimum bandwidth to the CSV and LM vNICs on the Hyper-V host to make sure that cluster reliability and stability are protected (see the sketch after this list).
  • You can benefit from SMB Multichannel and use it with the vNICs to provide more bandwidth if needed. Each vNIC will use a VMQ, and SMB Multichannel can use more CPU cores to improve network throughput for the same traffic. CSV redirection and Live Migration traffic are good examples.
  • You will be able to use LACP if needed, provided the two physical NICs are connected to a physical switch. This might or might not be desirable depending on your needs. An LACP configuration will force you to use a MIN-QUEUE configuration for dVMQ, so if you plan to run more VMs than queues, it may not be the best decision. You can always use a Switch Independent / Hyper-V Port or Dynamic configuration for your team if you don't expect to need more than 10 Gb of throughput for a single VM.
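To make the converged design concrete, here is a minimal PowerShell sketch (the team, switch, and vNIC names and the weight values are placeholders; in a VMM-managed deployment the logical switch would generate the equivalent configuration for you):

# Team the two 10 GbE adapters and build a converged switch with weight-based QoS
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host vNICs for management, CSV, and Live Migration traffic
"Mgmt","CSV","LM" | ForEach-Object {
    Add-VMNetworkAdapter -ManagementOS -Name $_ -SwitchName "ConvergedSwitch"
}

# Minimum bandwidth weights so cluster traffic survives contention
Set-VMNetworkAdapter -ManagementOS -Name "Mgmt" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "CSV"  -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "LM"   -MinimumBandwidthWeight 30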

This architecture seems really handy, right? I can manage almost all my network settings from VMM while preserving good network throughput... but let's list some caveats and constraints:

 

  • You lose RSS for the Hyper-V host network traffic. This might have a performance impact in your private cloud if you don't properly mitigate the penalty of losing RSS by creating several vNICs on the host to use SMB Multichannel. For example, LM speed can be impacted unless you take advantage of SMB Multichannel.
  • Generally speaking, you have a single point of failure in the team. If for whatever reason the team has a problem, all your networking might be compromised, impacting host and cluster stability.
  • VM deployment from the VMM library server will have just one core for the queue of the management vNIC. This can reduce deployment and provisioning speed, which can be a minor or a major issue depending on how often you deploy VMs. One core can handle around 4 Gbps of throughput, so I don't really see this as a big constraint, but it is something to keep in mind.
  • The number of VMQs available may be a concern if we plan to run a lot of VMs on the Hyper-V host. All VMs without a queue will interrupt CPU 0 by default to handle their network traffic.
  • We still need to configure VMQ processor settings with PowerShell.
  • It is impossible to use SR-IOV with this configuration, because SR-IOV adapters can't be teamed at the host level.

The strength of a Hyper-V converged network using SCVMM 2012 R2 and PowerShell is that 90% of the required configuration can be done from VMM. However, we may face some performance constraints that we should be aware of before making any decision. As a general rule, I would say that most environments today can offer more performance than what is required, but in some cases we will need to think about different approaches.

Stay tuned for part 4!

Tip of the Day: Good Bye VDS, Hello SMAPI


Today’s Tip…

In Windows 8 and Windows Server 2012, we introduced the Storage Management API (SMAPI) as a means for managing local and array-based storage. SMAPI supersedes the Virtual Disk Service (VDS), which was the previous interface for managing storage in Windows. However, legacy VDS tools such as Disk Management and DiskPart are still in the product.

Just as the VDS service used the notion of VDS hardware providers, supplied by array vendors, to manage storage, SMAPI uses Storage Management Providers (SMPs) as its interface for managing storage.

For host-specific storage objects (Disk, Partition, Volume), the management API allows managing resources with no additional drivers required.

For provider-specific objects (PhysicalDisk, Storage Pool, Virtual Disk, Storage Provider, Storage Subsystem), a Storage Management Provider is required. In the case of Storage Spaces, an SMP is provided in Windows. For management of third-party arrays, a vendor-provided SMP is required.
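For example (the pool and disk names and the size below are illustrative), the host-specific objects are available directly through the Storage module cmdlets, while pool and virtual disk operations go through an SMP such as the in-box Storage Spaces provider:

# Host-specific objects: no additional provider required
Get-Disk | Select-Object Number, FriendlyName, PartitionStyle
Get-Volume -DriveLetter C

# Provider-specific objects: handled by a Storage Management Provider
Get-StorageSubSystem
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "*Storage Spaces*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data1" `
    -Size 100GB -ResiliencySettingName Mirror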


More information can be found in the blog post An Introduction to Storage Management in Windows Server 2012.

Hyper-V 2012 R2 Network Architectures Series (Part 4 of 7) – Converged Networks using Static Backend QoS


OK. We have discussed the pros and cons of non-converged networks and converged networks with SCVMM 2012 R2 in the last two posts, but what happens when you already have a third-party technology that provides QoS in the backend? Several vendors and partners offer this hardware functionality, but the most common ones are Virtual Connect with Flex-10 FlexNICs, the Cisco FEX with UCS, and Dell NPAR.

Again, let's take a look at the following diagram and elaborate on what we are seeing:

Figure: Converged network with static backend QoS

Starting from the bottom, you can see two big purple NICs representing the real uplinks available in the backend. These two uplinks are usually connected to different physical switches or fabrics to provide HA and redundancy at the physical layer, but that is not what Windows sees after everything is configured.

Usually Windows will see eight NICs; in this diagram we use only six of them because we assume that the storage connection is done via FC over HBAs. In the third-party backend QoS setup, we need to define which networks are required for a good Hyper-V cluster environment and the bandwidth required for each of those networks.

One possible approach is to multiplex the uplinks with the above configuration. This has almost the same benefits as a non-converged network architecture, except for the following:

  • LACP is not exposed to Windows, so we cannot aggregate incoming traffic for the Mgmt or VM teams.
  • In this static QoS configuration, the bandwidth is hard-blocked. Even if it's not used, it will not be available for the other traffic. This might waste bandwidth and may not be desirable.
  • Combining the backend QoS with the SCVMM 2012 R2 QoS for the VMs can be difficult to understand and configure.
  • Windows will see physical NICs with non-standard bandwidths (anything different from 1 Gb or 10 Gb).
  • It will require additional configuration and knowledge of how to integrate these backend solutions with Hyper-V.
  • Firmware and drivers from these vendors will be a key component of performance and reliability.

On the other hand, if our environment is really static and we know the bandwidth consumption of all our traffic for sure, these third-party solutions can help us use RSS and dVMQ over the same uplinks. From the Windows perspective all the adapters are physical, and depending on the driver and the model you will be able to use both.

This kind of setup was really common until about a year ago, when almost every vendor started to offer dynamic QoS in the backend. Let's talk about that in the next post.
