Channel Description:

Resources for IT Professionals



    A free e-learning course for Microsoft Project 2016, released in the fall of 2015, is available on the site of our partner TIS.

    For an overview of the free Project 2016 e-learning and to enroll, see ➡ https://www.microsoft.com/ja-jp/project/e-larning/default.aspx

    The course lets you master the basic operations of Microsoft Project Standard 2016 through roughly three hours of lessons and exercises.

     

    As a bonus for attendees, you can earn 3 PDUs by applying to TIS after completing the e-learning.

    (PDUs are the points required to maintain the PMP (Project Management Professional) certification administered by PMI, the international project management institute. If you are interested in the PMP or other certifications, please see the PMI Japan Chapter's page: https://www.pmi-japan.org/pmp_license/ )

    If you have the basics down and want more in-depth training, partners also offer paid courses. ➡ https://www.microsoft.com/ja-jp/project/training.aspx

     

    Once you have mastered the basic operations through the e-learning, the next step is to try the free trial of Project Pro for Office 365:  https://products.office.com/ja-jp/Project/project-pro-for-office-365


     

    If you currently manage projects in Excel, Microsoft Project has a wizard that makes it easy to import your Excel data.

    You could start up the free trial of Project Pro for Office 365, import your own Excel data, and see the difference between Excel and Project for yourself.

    For how to import Excel data into Project, see this page:

    https://support.office.com/ja-jp/article/Excel-%25E3%2583%2587%25E3%2583%25BC%25E3%2582%25BF%25E3%2582%2592-Project-%25E3%2581%25AB%25E3%2582%25A4%25E3%2583%25B3%25E3%2583%259D%25E3%2583%25BC%25E3%2583%2588%25E3%2581%2599%25E3%2582%258B-cb3fb91a-ad05-4506-b0af-8aa8b2247119?ui=ja-JP&rs=ja-JP&ad=JP

    Conversely, you can also export data created in Project to Excel:

    https://support.office.com/ja-jp/article/%e3%83%97%e3%83%ad%e3%82%b8%e3%82%a7%e3%82%af%e3%83%88-%e3%83%87%e3%83%bc%e3%82%bf%e3%82%92-Excel-%e3%81%ab%e3%82%a8%e3%82%af%e3%82%b9%e3%83%9d%e3%83%bc%e3%83%88%e3%81%99%e3%82%8b-ce71a2a4-e9ab-4879-a6f9-19421a70c13d?ui=ja-JP&rs=ja-JP&ad=JP

     

    If you run into questions while actually using Project, you can ask the community.

    (Click "Ask the community".)

    https://support.microsoft.com/ja-jp/gp/gp_project_main/ja?wa=wsignin1.0

    Make the most of the free e-learning, the free trial, and the community to experience a full-fledged project management tool.

    That wraps up today's brief introduction to the Project 2016 free e-learning and free trial.



    ~Karan Rustagi

    We see this a lot – people are enthusiastic to get the trial started, and then life gets in the way. The good news is that we do have a trial extension process:

    Contact Billing department at numbers listed here - http://onlinehelp.microsoft.com/windowsintune/jj839713.aspx

    Step 1: Choose Option 1

    Step 2: Choose Option 1 again

    Step 3: Choose Option 2

    As long as the current trial is still active, you can request an extension.



    After a three-month internship in software development, Clemens Conrad began suffering from pain in his wrists. Within two weeks both forearms had swollen, one after the other, and working at a computer was possible only with great pain. Then came the diagnosis: "RSI syndrome". It took three years before he could work completely pain-free again. In this blog post he describes how the right PC accessories at his workstation helped him get there.


    Diagnosed with "RSI Syndrome"

    Shortly after my 20th birthday, severe pain in both forearms hit me completely unprepared. Within a few days, working at a PC was barely possible. Even everyday things such as opening bottles or driving a car gave me serious trouble. Treatment by more than ten doctors and physiotherapists brought hardly any improvement. I discovered that this condition is almost unknown in Germany. Only through my own research on the internet, training with an experienced physiotherapist, and an ergonomic workstation setup was I able to overcome the pain. The process took three years.

    RSI syndrome is an umbrella term for various kinds of pain in muscles, tendons, and nerves, usually caused by frequently repeated movements. In people who regularly work at a computer for several hours a day, the forearms and the neck and shoulder area are often affected. Colloquially it is often called mouse arm or tennis elbow.

    Slight tension in the neck or forearms after a hard day's work is nothing unusual. These occasional mild complaints usually subside overnight, and you can carry on working normally the next morning. It becomes a problem when the complaints grow stronger and are still noticeable the next day. If nothing is then changed about the workstation setup and posture, the pain can become chronic.


    Taking Initiative at the Workstation Helps

    Depending on the severity of the symptoms, you should always see a doctor first to rule out other illnesses. For severe complaints that can only be attributed to intensive computer work, your own initiative is required. Address the complaints as early as possible through changes to your workstation and your personal work habits. Take a close look at your workstation. In many companies you can do this together with the company doctor or the occupational safety specialist. A few tips:

    • Adjust your desk and office chair to your body height.
    • Write longer texts with speech recognition.
    • Connect an external mouse and keyboard even when working on a laptop.
    • Place the keyboard centered in front of the monitor.
    • Do not bend your wrists upward while typing (do not flip out the keyboard feet).
    • Keep your elbows close to your body.
    • Do not leave your hands resting on the mouse or keyboard while you are only reading on screen.
    • Move your upper body regularly and vary your sitting posture (e.g. a standing desk).
    • Briefly shake out your arms at regular intervals.
    • While doing so, rest your eyes by looking at a distant point (not the monitor).
    • Exercise regularly to compensate.

     

    Ergonomic Keyboards Help Prevent Pain

    In addition to these measures, ergonomic PC accessories help prevent pain. An ergonomic keyboard should have the following three properties: A split key field (left image) makes it possible to avoid bending the wrists outward, significantly reducing the risk of tendonitis from long hours of working with angled wrists. A palm rest (middle image), possibly combined with a key field that slopes away toward the back, helps avoid bending the wrists upward. Finally, an upward-curved key field (right image) reduces the inward rotation of the arms.


    I currently work with the Microsoft Natural Ergonomic Keyboard 4000, but newer models such as the Sculpt Ergonomic Desktop also meet these criteria. Since the keyboards are quite wide overall, I have made a habit of operating the mouse with my left hand. The advantage is that the arm does not have to be spread as far outward when using the mouse. I usually write longer texts with the help of additional speech recognition software, which takes further strain off the hands.


    Microsoft's Long Tradition of Ergonomic PC Accessories

    For many years Microsoft has been a leading vendor in ergonomic product design. The current Sculpt product line, for example, offers a wide selection of modern mice and keyboards with innovative features and an ergonomic design. More information about ergonomic PC accessories from Microsoft is available at https://www.microsoft.com/hardware/de-de



     

     





    A guest post by Clemens Conrad,
    author of the book „RSI-Syndrom, Mausarm, Tennisarm“.
    On the accompanying website http://www.repetitive-strain-injury.de Clemens shares his experiences with RSI and gives advice on avoiding these problems.




    Mark your calendar! Tune in to #TechNetVC live Mar 1-3 for all things IT pro: http://msft.it/6019Bk30N




    Today’s Tip…

    Do you have specific files or websites you use often? Do you prefer the website over the app? Or maybe an internal website whose URL you can never remember? Wanna pin it to the Start Menu in Windows 10? Of course you do! Here are the steps for both Edge and IE.


    Visit the site, click on the "…".


    Select Pin to Start. It's that easy.


    For files or websites that you want to open in IE, you’ll need a shortcut.  Here’s how to do this from scratch. Scroll down for steps on existing items.

    Step 1

    Right click on the desktop and select New -> Shortcut


    Type in the URL for your site:


    And a friendly name:


    The generic shortcut will now be on your desktop. Note: You can customize the shortcut's icon here if you'd like, but it's not required.


    Step 2

    Go to Run (right click on Start and select Run or Windows Key + R).

    Type in %appdata%\Microsoft\Windows\Start Menu\


    Hit enter.  Windows Explorer will open to that location.

    Either drag the newly created shortcut to the folder or create a subfolder under Programs. Copying the link to the root Start Menu folder means the item shows up on the All Apps list (shown in the example). If it’s in a subfolder under Programs, you will find the shortcut in that subfolder on All Apps.  Organization is up to you.
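    Incidentally, the shortcut file itself is just a tiny text file in the standard [InternetShortcut] format, so this create-and-copy step can also be scripted. A minimal sketch (the URL, name, and target folder below are examples):

```python
import os

def create_url_shortcut(target_url, name, folder):
    """Write a minimal .url Internet-shortcut file (the same format
    Windows uses for IE favorites) into `folder` and return its path."""
    path = os.path.join(folder, name + ".url")
    with open(path, "w", encoding="utf-8") as f:
        f.write("[InternetShortcut]\n")
        f.write("URL=" + target_url + "\n")
    return path

# On Windows, the Start Menu folder from this tip would be:
#   start_menu = os.path.expandvars(r"%APPDATA%\Microsoft\Windows\Start Menu")
#   create_url_shortcut("https://example.com/", "My Site", start_menu)
```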


    Step 3

    Now open the Start Menu and scroll down the All Apps view. You should see your shortcut listed here.


    From here, right click and select "Pin to Start".


    Once on Start, you can resize between small and medium.


    Existing Links

    If you have an existing Internet Explorer favorite that you want to add to Start, copy the link file from the Favorites folder (%userprofile%\Favorites) to %appdata%\Microsoft\Windows\Start Menu\


    Back on the Start menu, under All Apps, the shortcut can now be pinned.




     

    This is a common practice for rotating out old physical servers coming off lease, or when moving VM-based management servers to a new operating system.

     

    There are some generic instructions on TechNet here:  https://technet.microsoft.com/en-us/library/hh456439.aspx   However, these don’t really paint the whole picture of everything that should be checked first.  Customers sometimes run into orphaned objects, or management servers they cannot delete because the MS is hosting remote monitoring activities.

    Here is a checklist I have put together. The steps are not necessarily required in this order, so you can rearrange much of this as you see fit.

     

    • Install new management server(s)
    • Configure any registry modifications in place on existing management servers for the new MS
    • Patch the new MS with the current UR to bring it to parity with the other management servers in the management group.
    • If you have gateways reporting to old management servers, install certificates from the same trusted publisher on the new MS, and then use PowerShell to change GW to MS assignments.
    • Inspect Resource pools. Make sure old management server is removed from any Resource pools with manual membership, and place new management servers in those resource pools.
    • If you have any 3rd party service installations, ensure they are installed as needed on the new MS (connector services, hardware monitoring add-ons).
    • If you have any hard coded script or EXE paths in place for notifications or scheduled tasks, ensure those are moved.
    • If you run the Exchange 2010 Correlation engine - ensure it is moved to a new MS.
    • If you use any URL watcher nodes hard coded to a management server - ensure those are moved to a new MS. (Web Transaction Monitoring)
    • If you have any other watcher nodes - migrate those templates (OLEDB probe, port, etc.)
    • If you have any custom registry keys in place on a MS, to discover it as a custom class for any reason, ensure these are migrated.
    • If you have any special roles, such as the RMSe - migrate them.
    • Ensure the new MS will host optional roles such as web console or console roles if required.
    • Migrate any agent assignments in the console or AD integration.
    • Ensure you have BOTH old and new management servers online for a considerable time to allow all agents to get updated config – otherwise you will orphan any agents that never learned about the new management server.
    • If you perform UNIX/LINUX monitoring, these should migrate with resource pools. You will need to import and export SCX certs for the new management servers that will take part in the pool.
    • If you use IM notifications, ensure the prerequisites are installed on the new MS.
    • Ensure any new management servers are allowed to send email notifications to your SMTP server if it uses an access list.
    • If you have any network devices, ensure the discovery is moved to another MS for any MS that is being removed.
    • If you are using AEM, ensure this role is reconfigured for any retiring MS.
    • If you are using ACS and the collector role needs to be migrated, perform this and update the forwarders to their new collector.
    • If you have customized heartbeat settings for the management server, ensure this is consistent on the new MS.
    • If you have any agentless monitored systems (rare) move their proxy server.
    • If you were running a hardware load balancer for the SDK service connections - remove the old management servers and add new ones.
    • Review event logs on new management servers and ensure there aren't any major health issues.
    • Uninstall old management server gracefully.
    • Delete management server object in console if required post-uninstall.
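    As a sketch of the gateway reassignment step above, the Operations Manager PowerShell module provides cmdlets for repointing gateways. Server names below are placeholders; run this from the Operations Manager Shell in your environment:

```powershell
# Placeholder names - substitute your own servers.
Import-Module OperationsManager

$newMS    = Get-SCOMManagementServer -Name "NEWMS01.contoso.com"
$failover = Get-SCOMManagementServer -Name "NEWMS02.contoso.com"
$gateway  = Get-SCOMGatewayManagementServer -Name "GW01.contoso.com"

# Repoint the gateway to the new primary MS, then assign a failover.
Set-SCOMParentManagementServer -GatewayServer $gateway -PrimaryServer $newMS
Set-SCOMParentManagementServer -GatewayServer $gateway -FailoverServer $failover
```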

     

    If you have any additional steps you feel should be part of this list – feel free to comment.



    If you have any questions about this blog post series or DevOps, feel free to contact me directly on Twitter: http://twitter.com/jcorioland.

    This post is part of the series “VorlonJS – A Journey to DevOps”

    Introduction

    In the previous articles of this series, we discussed how to use the VSTS build system for continuous integration, validating that the code builds each time a commit is pushed to GitHub. This build also generates some artifacts: the application code with its Node.js dependencies (modules) and some scripts (in the DeploymentTools folder) that allow us to automatically create the infrastructure on Microsoft Azure (Infrastructure as Code) and deploy the application.

    As you can see in the capture above showing the artifacts produced by the build, there is a deployment-package.zip archive along with some JSON and PowerShell scripts.

    The zip archive contains all the application code and its dependencies. It’s the archive that can be deployed to an Azure Web App using Web Deploy. The JSON scripts are Azure Resource Manager templates that we’ll use to create the different environments, such as development, pre-production, and production. There is also a PowerShell script that will be used to delete a deployment slot on the production Azure Web App.

    NB: you can browse these scripts directly on our GitHub repository : https://github.com/MicrosoftDX/Vorlonjs/tree/dev/DeploymentTools
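    For readers unfamiliar with these templates: an ARM template for an environment like this typically declares an App Service plan and a Web App. The skeleton below is an illustrative sketch, not the actual VorlonJS template (names, SKU, and API versions are assumptions):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "siteName": { "type": "string" }
  },
  "resources": [
    {
      "apiVersion": "2015-08-01",
      "type": "Microsoft.Web/serverfarms",
      "name": "[concat(parameters('siteName'), '-plan')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "S1" }
    },
    {
      "apiVersion": "2015-08-01",
      "type": "Microsoft.Web/sites",
      "name": "[parameters('siteName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', concat(parameters('siteName'), '-plan'))]"
      ],
      "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', concat(parameters('siteName'), '-plan'))]"
      }
    }
  ]
}
```

Deploying such a template through the "Azure Resource Group Deployment" task is what makes the environments reproducible: the same file creates Development, Pre-Production, or Production on demand.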

    In this blog post, we will discuss how it is possible to use these artifacts in Visual Studio Team Services Release Management in order to build a release workflow that will automatically deploy the application across different environments.

    Connect your Microsoft Azure subscription to Visual Studio Team Services

    In the same way that you connected your GitHub account to be able to get the code during the build, you need to connect your Microsoft Azure subscription to Visual Studio Team Services to deploy the application automatically using Release Management.

    Go to the settings of the team project using the gear button at the top right of the window and then select the “Services” tab. There you can add a new connection to your Microsoft Azure subscription:

    You can use your credentials directly to connect your subscription. To get your subscription ID, hover over the info icon and click the “publish settings file” link that appears under the field:

    NB: Don’t have an Azure subscription yet? You can create a new one and try it for free for one month by following this link.

    Work with release definition

    The first step in using VSTS Release Management is to create a release definition. First, you need to go to the RELEASE* tab.

    NB: the * is because Release Management is in preview at the time I am writing this series.

    Click the + button in the left pane to create a new release definition. You can choose to start with an existing template or from an empty definition:

    In this case, I started from an empty one. The designer looks like the one you used to define a new build, but there are a few additional steps before you start defining the release workflow.

    First, you need to give the new release definition a name and link it to a build definition by clicking the “Link to a build definition” link under the name field.

    This step is really important because it allows the artifacts published by a build to be used in the release workflow. In the case of Vorlon.JS, we linked the continuous integration build so we could use the deployment package and scripts that were published in the last step of the build. It also allows the release to be triggered each time the build completes successfully, if you want to deploy the application continuously.

    Now, you can define the different environments where the application will be released. In this case, we have three environments: Development, Pre-Production and Production. To add a new environment, click the “Add environments” button and give a name to each environment you have added:

    NB: if the steps are the same for each environment, you can define the first one and then clone it from the “…” icon menu:

    NB: the “Save as template…” entry allows you to create a template from this environment definition. The template will be available when you create a new release definition or environment.

    For each environment, you can define a set of tasks that will be executed. As with the build system described in a previous post in this series, there are a lot of predefined tasks you can use. To add a new task to a given environment, click the Add tasks button:

    For example, you can use the “Azure Resource Group Deployment” task, which creates or updates a resource group from an Azure Resource Manager JSON template, or “Azure Web App Deployment”, which deploys a web application using Web Deploy. There are also tasks to help you handle deployment using external tools like Chef or Docker.

    The screen below details the different steps we have configured for releasing Vorlon.JS in the Development environment:

    The first step creates or updates a resource group named “VorlonJS-Development-RG” using the deployment templates published by the continuous integration build. You can browse the artifacts published during the build and choose the files you want to use by clicking the “…” button:

    The second step publishes the application in the Azure Web App that was created in the first step, using the deployment-package.zip archive created during the build:

    The third step updates the configuration of the Azure Web App using an Azure Resource Manager template.

    You can also use the “Azure Resource Group Deployment” task to remove a resource group. In the Vorlon.JS release workflow, we wanted to dispose of the infrastructure as soon as an environment is no longer used. So the first step of the Pre-Production release workflow is to remove the Development environment. We also remove the Pre-Production environment as soon as the deployment to the Production environment starts:

    This way, we optimize our usage of Microsoft Azure and do not keep unused applications or infrastructure running.

    Once you have configured all your environments and all the release steps, click the Save button in the toolbar to save the release definition.

    Assign approvers

    Each environment represents a group of steps in your release workflow. Because you may want to have some control between the transition from one environment to the other, you can assign approvers before the deployment (pre) and after the deployment (post).

    In the case of Vorlon.JS, we configured approvers before deploying to the Pre-Production and Production environments. To do that, click the “…” icon of the environment you want to configure and choose “Assign approvers…”:

    In this case, the deployment to the Pre-Production environment will not start until Stéphane and I approve the Development deployment. This feature is really important when you have different teams working on each environment.

    For example, you may have the developers’ team validate the Development environment to make sure all is OK before sending the app to Pre-Production. Then a QA/testers’ team executes some UI or load tests on the Pre-Production environment, before the deployment to the Production environment starts with the approval of a product owner.

    Configure Continuous Deployment

    Another cool feature of Release Management is Continuous Deployment. If you want, you can tell VSTS to create a new release each time the linked build completes successfully. To do that, go to the Triggers tab and check Continuous Deployment. You need to choose the build and the target environment:

    Create a new release

    Once you are done with your release definition, you can test it by creating a new release. To do that, click the Release button in the toolbar and choose Create Release:

    Click Create and wait for the release to start.

    Get the status of a release

    You can get the status of a release by double-clicking on its line in the Releases table. On the summary tab, you have an overview of the status of the release:

    Here, the release has succeeded in the Development environment and an approval is pending to continue to the Pre-Production environment. It’s possible to approve or reject the release by clicking on the link in the yellow bar:

    If you are satisfied with the application deployed in the first environment, click Approve; otherwise you can Reject the release (the other environments will not be deployed!).

    The Logs tab allows you to get detailed information about each step of the release:

    You can also get a big picture of a given release by clicking its name in the left pane and then on the Overview tab:

    Here, you can see that the Release #42 has been successful in the Development environment, is pending in the Pre-Production environment and that the Release #40 has been rejected in the Production environment.

    Conclusion

    You can now implement release management for your own applications and use it to improve the way you deliver new features to your customers, automatically. But deploying an application to the production environment is not the last step of its lifecycle! Now you need to get information about the application: you may want diagnostics or usage information so you can improve it continuously.

    There are several tools that will help you to get this kind of information and that we have used on Vorlon.JS. We will describe them in the next posts!

    Stay tuned!




    Configuration Manager clients cannot download package content from a Cloud Distribution Point if the local Distribution Point has the BranchCache feature enabled. In this scenario, errors resembling the following are logged in the ContentTransferManager.log file:

    CTM job {job_guid} successfully processed download completion.
    Failed to decrypt C:\Windows\ccmcache\{package_filename}

    A supported hotfix is available from Microsoft to address this problem. For complete details and a download link, please see the following:

    3120338 - FIX: Content can’t be downloaded from Cloud-Based Distribution Points in System Center 2012 Configuration Manager Service Pack 2 when BranchCache is enabled (https://support.microsoft.com/en-us/kb/3120338)

    J.C. Hornbeck | Solution Asset PM | Microsoft




    The SQL Server engineering team is pleased to announce the immediate availability of SQL Server 2016 public preview release CTP 3.3. This release advances the “Cloud First” tenet; the build has already been deployed to Azure SQL Database worldwide.

    To learn more about the release, visit the SQL Server 2016 preview page. To experience the new, exciting features in SQL Server 2016 and the new rapid release model, download the preview or try the preview by using a virtual machine in Microsoft Azure and start evaluating the impact these new innovations can have for your business.

    Questions? Join the discussion of the new SQL Server 2016 capabilities at MSDN and StackOverflow. If you run into an issue or would like to make a suggestion, you can let us know using Microsoft’s Connect tool. We look forward to hearing from you.

    New Stretch Database improvements in CTP 3.3 include:

    • Support to enable TDE on a remote DB if the local database has TDE enabled
    • Azure Stretch database edition preview with support for up to 60TB
    • Alter and drop index support for stretch tables
    • Add, alter and drop columns support for stretch tables
    • Point-in-time restore and geo-failover support
    • Query performance improvement

    SQL Server Management Studio improvements in this release include:

    • Additional Wizard features:
      • Added new SQL db credential management functionality
      • Integrated Table validation and selection updates to prevent stretch of unsupported datatypes at selection time
      • Table search functionality for table select page
      • Table selection column reordering
      • Support for temporal tables during table select
      • Integrated Azure sign in and SQL sign in credential
      • Add support for stretching using federated accounts
      • New firewall configuration and subnet detection functionality
      • Updated summary page details with pricing information
      • Improved SSMS visualization with StretchDB icons
    • Object Explorer:
      • Fly out menu updates to support disable and un-migration functionality
      • Un-migrate support functionality at database and table level

    Read the SSMS blog post to learn more.

    CTP 3.3 adds In-Memory OLTP support for:

    • Automatic update of statistics on memory-optimized tables: The statistics for memory-optimized tables are now updated automatically, removing the need for running maintenance tasks that update statistics manually.
    • Sampled statistics for memory-optimized tables: Sampling of statistics for the data in memory-optimized tables is now supported, alongside the previously supported fullscan statistics. This reduces the time it takes to collect statistics for large tables.
    • Use of LOB types varchar(max), nvarchar(max), and varbinary(max) with built-in string functions (‘+’, len, ltrim, rtrim and substring) in natively compiled modules, and as return type of natively compiled scalar UDFs.
    • Memory-optimized tables with row size > 8060 bytes, using non-LOB types. CTPs 3.1 and 3.2 supported larger rows using LOB types; as of CTP 3.3, memory-optimized tables also support larger rows using the types varchar(n), nvarchar(n) and varbinary(n). See below for an example.
    • The OUTPUT clause can now be used with INSERT, UPDATE and DELETE statements in natively compiled stored procedures.
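    As an example of the larger-row support, the following sketch (an illustrative schema, not taken from the product documentation) defines a memory-optimized table whose maximum row size exceeds 8060 bytes using only non-LOB types:

```sql
-- varchar(4000) allows up to 4000 bytes and nvarchar(4000) up to 8000 bytes,
-- so the maximum row size is well beyond the old 8060-byte limit.
CREATE TABLE dbo.WideMemOpt
(
    Id   INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Col1 VARCHAR(4000)  NOT NULL,
    Col2 NVARCHAR(4000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```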

    Autostats improvements in CTP 3.3

    Previously, statistics were automatically recalculated when the change exceeded a fixed threshold. As of CTP 3.3, we have refined the algorithm such that it is no longer a fixed threshold, but in general will be more aggressive in triggering statistics scans, resulting in more accurate query plans.

    Foreign Key Support enhancements in CTP 3.3

    SQL Server 2014 and earlier versions have limitations on the number of FOREIGN KEY references a table can contain, as well as on the maximum number of incoming foreign key REFERENCES. The documented recommended maximum is 253, and when performing DML on tables with large numbers of incoming REFERENCES, statements time out with stack overflow error messages.

    This improvement increases the number of supported incoming foreign key REFERENCES to a table, while maintaining good performance for DML operations in both the referencing and the referenced table. The new maximum is 10,000. However, with the CTP 3.3 release, we have certain limitations on this feature:

    • We support ONLY the DELETE DML operation on foreign key references that go beyond the current recommended maximum of 253. Therefore, we will validate that no referencing rows exist before deletion.
    • Update and Merge operations are not supported with this release. Update will be available in RTM.
    • You will not see any change in behavior for cascading actions.
    • This is not available in ColumnStore, Hekaton or StretchDB.
    • This change is not applicable to a primary key table that is self-referencing (that is, if the table has a foreign key to itself). In this case, the behavior would remain the same as before.
    • This is not supported for partitioned foreign key tables for CTP 3.3. However, partitioned tables will be supported in RTM.

    SQL Server Analysis Services (SSAS) includes multiple additions in this release. Read the SSAS CTP 3.3 blog post to learn more.

    SQL Server Reporting Services (SSRS) includes an updated preview of its brand-new web portal with additional functionality:

    • Add the KPIs and reports that matter most to you to your Favorites and view them all in one place.
    • Manage shared data sources for your KPIs and reports and perform other management tasks.

    Read the SSRS blog post to learn more.

    Master Data Services (MDS) improvements in this release include:

    • Business rule changes
      • New, easier-to-use web UI administration page
      • Support for NOT conditional operator
      • Support for ELSE section that contains a set of actions to execute if the IF condition is false
      • Removed management UI from Excel add-in
    • Added support for purging (hard-deleting soft-deleted members) of an entity version
    • Added to the web explorer page a button to open the current entity view in the Excel add-in



    The SQL Server 2016 Community Technology Preview (CTP) 3.3 is now available for download! In SQL Server 2016 CTP 3.3, part of our new rapid preview model, we made enhancements to several features which you can try in your development and test environments.

    In SQL Server 2016 CTP 3.3, available for download or in an Azure VM today, some of the key improvements include:

    • Continued enhancement of Stretch Database: Stretch Database allows you to stretch operational tables in a secure manner into Azure for cost-effective historic data availability. CTP 3.3 includes multiple improvements to Stretch Database, including Azure Stretch database edition preview with support for up to 60TB, Point-in-time restore and geo-failover support.
    • Enhancements to In-Memory OLTP: In-Memory OLTP, which dramatically improves transaction processing performance, gains additional feature support in CTP 3.3.
    • Enhancements to Analysis Services DirectQuery models: Analysis Services Tabular models running in DirectQuery mode now also allow the use of DAX filters when defining roles, as well as the creation of calculated columns.
    • Enhancements to the new Reporting Services web portal: An updated preview of the new web portal now enables you to add the KPIs and reports you use to your Favorites, to create and edit shared data sources for your KPIs and reports, and to perform other management tasks.

    For additional detail, please visit the detailed CTP 3.3 technical overview blog post, CTP 3.3 Analysis Services blog post, CTP 3.3 Management Studio blog post, and SQL Server 2016 CTP 3.3 Reporting Services blog post.

    Download SQL Server 2016 CTP 3.3 preview today!

    As the foundation of our end-to-end data platform, SQL Server 2016 is the biggest leap forward in Microsoft's data platform history, with real-time operational analytics, rich visualizations on mobile devices, built-in advanced analytics, and new advanced security technology, available both on-premises and in the cloud.

    To learn more about the release, visit the SQL Server 2016 preview page. To experience the new, exciting features in SQL Server 2016 and the new rapid release model, download the preview or try the preview by using a virtual machine in Microsoft Azure and start evaluating the impact these new innovations can have for your business.

    Questions? Join the discussion of the new SQL Server 2016 capabilities at MSDN and StackOverflow. If you run into an issue or would like to make a suggestion, you can let us know on Connect. We look forward to hearing from you!



    This blog post is part of a new ongoing series of consolidated updates from the Cloud Platform team.

    In today’s mobile first, cloud first world, Microsoft provides the technologies and tools to enable enterprises to embrace a cloud culture. Our differentiated innovations, comprehensive mobile solutions and developer tools help all of our customers realize the true potential of the cloud first era.

    You expect cloud-speed innovation from us, and we’re delivering across the breadth of our Cloud Platform product portfolio. Below is a consolidated list of our latest releases to help you stay current, with links to additional details if you’d like more information. In this update:

    • Availability of OMS Extension for Linux VMs | Public - Extension Only
    • Azure IoT Hub | Price increase due to GA
    • Azure Premium Storage GA - new geo-availability | East Asia & AU SE- MEL
    • Azure Site Recovery - US Gov | GA
    • DV2 Price Drops | GA
    • Key Vault | GA
    • Power BI Content Pack - February | Troux GA & Azure Search GA
    • Power BI publish to web
    • Power BI gateway - enterprise | GA
    • Power BI mobile apps for iOS and Android update | GA
    • Power BI Windows 10 Universal app | GA
    • SQL Server 2016 | CTP 3.3
    • Virtual Machines and Cloud Services - D-Series - US Gov | GA
    • Visual Studio Code Extension for Apache Cordova | Public Preview
    • Visual Studio Dev Essentials: Azure Credit | GA
    • Visual Studio Dev Essentials: Xamarin University | GA

    Availability of OMS Extension for Linux VMs | Public - Extension Only

    Ease of onboarding is critical for massive deployments, especially when they live in the cloud. Azure now provides an Operations Management Suite extension for Linux machines. This capability enables Azure users to simply onboard to the Log Analytics service from the Azure management portal. Microsoft Operations Management Suite leverages log analytics and automation capabilities across physical and virtual machines, running Windows or Linux, in the cloud or on-premises, for seamless IT operations. Now in public preview, this feature is included in Microsoft Operations Management Suite.

    Azure IoT Hub | Price increase due to GA

    Microsoft Azure IoT Hub became generally available on February 3, 2016.

    Microsoft Azure IoT Hub provides a secured way to connect, provision, monitor, update, and send commands to devices. Azure IoT Hub enables companies to control millions of Internet of Things (IoT) assets running on a broad set of operating systems and protocols to jumpstart their IoT projects.

    Azure IoT Hub helps enable companies to:

    • Establish reliable bi-directional communication with IoT assets, even if they’re intermittently connected, so companies can analyze incoming telemetry data and send commands and notifications as needed
    • Enhance security of IoT solutions by leveraging per-device authentication to communicate with devices with the appropriate credentials
    • Revoke access rights to specific devices, if needed, to maintain the integrity of the system
    • Monitor device connectivity to identify security threats and operational issues as soon as possible
    • Work with common protocols such as HTTP, AMQP, AMQP over WebSockets, and MQTT
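Whatever the transport, requests to IoT Hub are authenticated with a shared access signature (SAS) token signed with a device or policy key. As a rough sketch (the hub name, device ID, and key below are made up, and the exact token format is worth verifying against the IoT Hub documentation), generating such a token looks something like this:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public class SasTokenSample
{
    // Build an IoT Hub-style SAS token: an HMAC-SHA256 signature over the
    // URL-encoded resource URI plus the expiry timestamp, base64-encoded.
    // The skn (policy name) field is omitted for per-device credentials.
    public static string BuildSasToken(
        string resourceUri, string base64Key, string policyName, long expiryUnixSeconds)
    {
        string encodedUri = Uri.EscapeDataString(resourceUri);
        string stringToSign = encodedUri + "\n" + expiryUnixSeconds;

        using (var hmac = new HMACSHA256(Convert.FromBase64String(base64Key)))
        {
            string signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

            return String.Format(
                "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
                encodedUri,
                Uri.EscapeDataString(signature),
                expiryUnixSeconds,
                policyName);
        }
    }

    public static void Main()
    {
        // hypothetical hub/device and a fake base64 key; fixed expiry for illustration
        string token = BuildSasToken(
            "myhub.azure-devices.net/devices/mydevice",
            Convert.ToBase64String(Encoding.UTF8.GetBytes("not-a-real-key")),
            "device",
            1500000000);

        Console.WriteLine(token);
    }
}
```

The same token goes into the Authorization header for HTTP requests, or into the password field for MQTT connections.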

    Azure IoT Hub is available as a standalone service or as one of the services used in Microsoft Azure IoT Suite.

    General availability pricing will go into effect on April 1, 2016. For details and pricing for Azure IoT Hub please visit the pricing page.

    Azure Premium Storage GA - new geo-availability | East Asia & AU SE- MEL

    Azure Premium Storage is now generally available in additional regions.

    Azure Premium Storage is a solid-state drive (SSD)–based storage solution designed to support I/O-intensive workloads. With Premium Storage, you can add up to 64 TB of persistent storage per virtual machine (VM), with more than 80,000 IOPS per VM and extremely low latencies for read operations.

    Offering a service-level agreement (SLA) for 99.9 percent availability, Premium Storage is now available in the East Asia and Australia Southeast regions, as well as these previously announced regions.

    Learn more about Premium Storage.

    Azure Site Recovery - US Gov | GA

    Azure Site Recovery is generally available.


    Azure Site Recovery provides full-featured disaster recovery that is simple, with automated protection and replication of your physical and virtual environments. Some must-have advantages Site Recovery furnishes for you are:

    • Quickly protect critical applications running on physical and virtual infrastructure, including Hyper-V and VMware.
    • Replicate and fail over your on-premises applications to Azure, negating the need to build and manage a second datacenter for recovery. Reduce expenses and pay for compute only when you need it.
    • Detect and stage multi-tier applications and restore them as a group, with specified startup ordering and the ability to insert scripts to bypass the need for manual configuration.
    • Perform periodic DR drills and testing without any impact on the production or recovery virtual machines.
    • Recover services in a secured, orderly manner in the event of a site outage.
    • Maintain continuous visibility with remote health monitoring and extensibility.

    The addition of Site Recovery as part of our Azure Backup and disaster recovery features helps meet a government agency's security rigor and requirements, as well as your hybrid cloud objectives. More updates will become available in the near future as we enable physical Linux and VMware Linux VM replication scenarios to Azure Storage and/or a secondary datacenter.

    For more information on how this service helps extend and protect your datacenter, and enables your hybrid cloud, please visit the Site Recovery documentation page.

    For more information on deployment scenarios, workload guidance, and features/requirements, please visit the What is Site Recovery? documentation page.

    For more information on best practices, please visit the Prepare for Azure Site Recovery deployment documentation webpage.

    DV2 Price Drops | GA

    Effective early February 2016, prices for the Dv2-Series for Azure Virtual Machines and Azure Cloud Services will be reduced by 10 to 17 percent, depending on the instance and operating system. These are the latest version of the popular D-Series VMs, and they have CPUs that are 35 percent faster, on average, and are based on the Intel Xeon (Haswell) E5 processors. The price reductions are in the following Azure regions: Australia East, Australia Southeast, Central US, East Asia, East US, East US 2, Japan East, North Central US, North Europe, South Central US, Southeast Asia, West Europe, and West US.

    For more details, please read the blog post to learn more.

    Key Vault | GA

    Azure Key Vault is generally available.


    Government agencies using Azure Key Vault will be able to safeguard cryptographic keys and secrets used by cloud applications and services, and enhance data protection and compliance:

    • Secure key management is essential to helping protect government data in the cloud.
    • Store keys and small secrets like passwords protected by hardware security modules (HSMs).
    • For added assurance, keys can be stored in HSMs that are validated to FIPS 140-2 Level 2 standard (hardware and firmware).
    • Key Vault is designed so Microsoft doesn’t see or extract your keys.
    • Reduce latency with cloud scale and global redundancy.
    • Protect your cloud application secrets (passwords, connection strings, storage account keys, SSL certificates, etc.) and reduce the risk of them leaking out via proliferation in source code or configuration files.

    With Key Vault, there's no need to provision, configure, patch, and maintain HSMs and key management software—you maintain all of the control, but we do the work.

    • You can provision new vaults and keys (or import keys from your own HSMs) in minutes and centrally manage keys, secrets, and policies.
    • You maintain control over your keys—simply grant permission for your own and third-party applications to use them as needed.
    • Applications do not have direct access to keys.
    • Developers easily manage keys used for Dev/Test and migrate seamlessly to production keys managed by secured operations.

    Key Vault becomes the single place to manage keys and secrets for your important Microsoft first-party services, for third-party services seeking to offer higher assurances, and for your own custom line of business (LOB) Azure-hosted applications. For example, you can:

    • Use Key Vault to store master key(s) for SQL Server with Transparent Database Encryption (TDE).
    • Use client-side Storage SDK to encrypt data using encryption keys stored in Key Vault.

    Note there are two tiers: Standard and Premium. Only the Premium tier supports creating HSM-protected keys.

    For more examples of what you can do with Key Vault, please visit the What is Azure Key Vault? documentation page.

    For more information on how to get started and links to refresh your PowerShell with new Key Vault cmdlets, please visit the Get started with Azure Key Vault documentation page.

    Power BI Content Pack - February | Troux GA & Azure Search GA

    Power BI continues to make it easier for users to connect to their data, providing pre-built solutions for popular services as part of the experience. This month we added content packs for Azure Search and Troux.

    Azure Search for Power BI allows you to monitor and understand the traffic to your Azure Search service. The Azure Search content pack for Power BI provides detailed insights on your Search data, including Search, Indexing, Service Stats and Latency. Connecting Power BI to your Azure Search analytics storage is easy and the analytics will include metrics from the last 30 days for you to explore.

    Troux enables businesses to manage technology investments and ensure the greatest impact while minimizing risk. The Power BI content pack allows you to visualize the applications, technologies, and capabilities of your current system, as well as analyze them over time.

    Subscribers to these supported services can now quickly connect to their account from Power BI and see their data through live dashboards and interactive reports that have been pre-built for them. Getting started with data visualization and analysis has never been easier. For additional information, visit the Power BI blog.

    Power BI publish to web

    We are announcing the public preview of the exciting new Microsoft Power BI publish to web capability. Microsoft Power BI publish to web allows you to tell compelling stories with interactive data visualizations in minutes.

    • Easily embed interactive Power BI visualizations in your blog and websites or share the stunning visuals through your emails or social media communications.
    • Reach millions of users on any device, anywhere.
    • Edit, update, refresh or un-share visuals with ease.

    Real-life industry usage and reference examples, videos, demos, and supporting blog posts and details are now publicly available.

    Power BI gateway - enterprise | GA

    The Microsoft Power BI gateway for enterprise deployments is generally available.

    With Power BI gateways, you can keep your data fresh by connecting to on-premises data sources. The gateways provide the flexibility you need to meet individual needs, and the needs of your organization. The Power BI gateway for personal use is designed for use with personal data sets and allows you to refresh your on-premises data quickly without waiting for an IT Admin. In addition to the Power BI gateway for personal use, we are now releasing the Power BI gateway for enterprise deployments. This gateway is used by organizations to serve a large number of users.

    Please visit the Power BI blog to learn more about this release or download and learn more about the Power BI gateways here.

    Power BI mobile apps for iOS and Android update | GA

    This massive mobile update contains major improvements that will help enhance the Power BI mobile experience, making it slicker, faster, and more fun.

    Among the features added we have Real Time Data, R-Tiles, Bing Dashboards and several data consumption improvements.

    All these new capabilities and improvements are already available in the Power BI app for Windows 10 Mobile announced just before the end of 2015, and are now being introduced to iOS and Android as well.

    Learn more about the update on the Power BI blog.

    Power BI Windows 10 Universal app | GA

    The Power BI app for Windows 10 is now available as a Windows 10 Universal app that can be installed and run on PCs, tablets, and phones.

    The Power BI universal app for Windows 10 offers a touch-optimized consumption experience for Power BI. With this app you can easily take your data on the go. Being a Windows 10 universal app makes Power BI look great on different form factors, from a small-screen device like your phone or tablet all the way up to the Microsoft Surface Hub, and includes Continuum support for Windows phones.

    SQL Server 2016 | CTP 3.3

    SQL Server 2016 Community Technology Preview (CTP) 3.3 was released earlier this week. In SQL Server 2016 CTP 3.3, part of our new rapid preview model, we have made enhancements to several features customers can try in their SQL Server 2016 development and test environments. In SQL Server 2016 CTP 3.3, customers will see advancements in several areas, including enhancements to Stretch Database, In-Memory OLTP, Analysis Services DirectQuery models, and Reporting Services web portal.

    Set up a SQL Server 2016 CTP preview test environment today! To experience the new, exciting features in SQL Server 2016 and the new rapid release model, download the preview or try the preview by using a virtual machine in Microsoft Azure and start evaluating the impact these new innovations can have for your business.

    Virtual Machines and Cloud Services - D-Series - US Gov | GA

    The D-Series is now available for Azure Virtual Machines, Azure Cloud Services, and web/worker roles.


    There is now expanded support for United States Government agencies with a new series of virtual machine (VM) sizes for Azure Virtual Machines and web/worker roles. The D-Series sizes offer up to 112 GB in memory with compute processors that are approximately 60 percent faster than our A-Series VM sizes (relative to the A1-A7 VM sizes). Even better, these sizes have up to 800 GB of local solid-state drives (SSDs) for blazingly fast disk read/write.

    The new sizes offer an optimal configuration for running workloads that require increased processing power and fast local disk input/output (I/O). These sizes are available for both Virtual Machines and Azure Cloud Services.

    In meeting US government requirements, this expanded support houses all customer data, applications, and hardware in the continental United States.

    Visual Studio Code Extension for Apache Cordova | Public Preview

    We are pleased to announce new ways to build, debug and preview Cordova apps using your favorite light-weight text editor (it's your favorite, right?).

    With this extension, you can debug hybrid apps, find Cordova-specific commands in the Command Palette, and use IntelliSense to browse objects, functions, and parameters. You can use it with both “stock” versions of the Apache Cordova framework and downstream frameworks like Ionic, Onsen, PhoneGap and SAP Fiori Mobile Client. Because they all use the same Cordova build systems and core runtime, the TACO extension is adaptable to the JavaScript framework of your choice.

    In fact, you can even use Visual Studio Code on a project created with the full Visual Studio IDE. For example, imagine creating a Cordova project using Ionic templates with Visual Studio on a Windows machine, then opening it on an OS X or Linux machine using Visual Studio Code—making some edits—then continuing your work in the Visual Studio IDE. No matter which editor you choose, you get the full benefit of debugging, IntelliSense, and language support. How cool is that?

    Visual Studio Code + Cordova Tools currently support debugging apps on emulators, simulators, and tethered devices for Android and iOS. If you ask for it (find us on Twitter), Windows support will follow not too far behind. You can also attach the debugger to an app that is already running on a device; the debugger simply uses the application ID to locate the running instance.

    Visual Studio Dev Essentials: Azure Credit | GA

    As a Visual Studio Dev Essentials member, you’ve got the power of the cloud at your command. Azure’s integrated tools, pre-built templates, and managed services make it easier to build and manage enterprise, mobile, web, and Internet of Things (IoT) apps faster, using skills you already have and technologies you already know.

    Use your credit on popular Azure services, including Virtual Machines, Web Apps, Cloud Services, Mobile Services, Storage, SQL Database, Content Delivery Network, HDInsight, and Media Services. All pricing for services uses the standard pay-as-you-go rates shown on the Pricing Calculator.

    Here are some ways you can use your $25 every month:

    • Provision and use one S0 Standard Azure SQL Database
    • Use one D2 Windows virtual machine for 95 hours
    • Analyze big data for 24 hours with an A3 HDInsight cluster
    • Run five Linux D11 virtual machines for 24 hours

    Activate your $25 monthly credit today!

    Visual Studio Dev Essentials: Xamarin University | GA

    Now, members of Visual Studio Dev Essentials have access to Xamarin University Mobile Training. This benefit equips Dev Essentials members with Xamarin University class recordings and materials from the Xamarin Mobile Fundamentals course.

    The initial course line-up includes the following lectures, and all the associated materials including interactive lab projects:

    • Intro to iOS 101 and 102
    • Intro to Android 101 and 102
    • Intro to Xamarin.Forms
    • Two guest lectures from industry luminaries (Azure Mobile Services and Intro to Prism)

    Activate your Xamarin University Mobile Training through Visual Studio Dev Essentials now!

    Learn more here.



    Microsoft is currently working on a new authentication and authorization model that is simply referred to as “v2” in the docs that they have available right now. I decided to spend some time taking a look around this model and in the process have been writing up some tips and other useful information and code samples to help you out as you start making your way through it. Just like I did with other reviews of the new Azure B2B and B2C features, I’m not going to try and explain to you exactly what they are because Microsoft already has a team of humans doing that for you. Instead I’ll work on highlighting other things you might want to know when you actually go about implementing this.

    One other point worth noting – this stuff is still pre-release and will change, so I’m going to “version” my blog post here and I will update as I can. Here we go.

    Version History

    1.0 – Feb. 3, 2016

    What it Does and Doesn’t Do

    One of the things you should know first and foremost before you jump in is exactly what it does today, and more importantly, what it does not. I'll boil it down for you like this – it does basic Office 365: email, calendar, and contacts. That's pretty much it; you can't even query Azure AD as it stands today. So if you're building an Office app, you're good to go.

    In addition to giving you access to these Microsoft services, you can secure your own site using this new model. One of the other terms you will see used to describe the model (besides “v2”) is the “converged model”. It’s called the converged model really for two reasons:

    • You no longer need to distinguish between “Work and School Accounts” (i.e. fred@contoso.com) and “Microsoft Accounts” (i.e. fred@hotmail.com). This is probably the biggest achievement in the new model. Either type of account can authenticate and access your applications.
    • You have a single URL starting point to use for accessing Microsoft cloud services like email, calendar and contacts. That’s nice, but not exactly enough to make you run out and change your code up.

    Support for different account types may be enough to get you looking at this. You will find a great article on how to configure your web application using the new model here: https://azure.microsoft.com/en-us/documentation/articles/active-directory-v2-devquickstarts-dotnet-web/. There is a corresponding article on securing Web API here: https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-webapi-dotnet/.

    Once you have that set up, you can see here what it looks like to have both types of accounts available for sign in:

    v2App_ConvergedSignIn

    The accounts with the faux blue badges are “Work or School Accounts” and the Microsoft logo account is the “Microsoft Account”. Equally interesting is the fact that you can remain signed in with multiple accounts at the same time. Yes! Bravo, Microsoft – well done.  :-)

    Tips

    Knowing now what it does and doesn't do, it's a good time to talk about some tips that will come in handy as you start building applications with it. A lot of what I have below is based on the fact that I found different sites and pages with different information as I was working through building my app. Some pages are in direct contradiction with each other, some contain typos, and some are just already out of date. This is why I added the “version” info to my post, so you'll have an idea about whether and when this post has become obsolete.

     

    Tip #1 – Use the Correct Permission Scope Prefix

    You will see information about using permission scopes, which is something new in the v2 model. In short, instead of defining the permissions your application requires in the application definition itself, as you do with the current model, you tell Azure what permissions you require when you go through the process of acquiring an access token. A permission scope looks something like this: https://outlook.office.com/Mail.Read. This is the essence of the first tip – make sure you are using the correct prefix for your permission scope. In this case the permission prefix is “https://outlook.office.com/”. There are still multiple documents out there, though, that say for the v2 model you should use the prefix “https://graph.microsoft.com/”. Do not do this. You will still get an access token if you do. However, when you try to use it to access a v2 service endpoint, you will get a 401 unauthorized response.

    Also…I found one or more places that used a prefix of “http://outlook.office.com/”; note that it is missing the “s” after “http”. This is another example of a typo, so just double-check if you are copying and pasting code from somewhere.

     

    Tip #2 – Not All Permission Scopes Require a Prefix

    After all the talk in the previous tip about scope prefixes, it's important to note that not all permission scopes require a prefix. There are a handful that work today without a prefix: “openid”, “email”, “profile”, and “offline_access”. To learn what each of these permission scopes grants, see this page: https://azure.microsoft.com/en-us/documentation/articles/active-directory-v2-compare/#scopes-not-resources.  One of the more annoying things in reviewing the documentation is that I didn't see a simple example demonstrating exactly what these scopes should look like. So…here's an example of the scopes and how they're used to request a code, which you can then exchange for an access token (and a refresh token, if you include the “offline_access” permission – see, I snuck that explanation in there). Note that they are “delimited” with a space between each one:

    string mailScopePrefix = "https://outlook.office.com/";

    string scopes =
        "openid email profile offline_access " +
        mailScopePrefix + "Mail.Read " +
        mailScopePrefix + "Contacts.Read " +
        mailScopePrefix + "Calendars.Read";

    string authorizationRequest = String.Format(
        "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?response_type=code+id_token&client_id={0}&scope={1}&redirect_uri={2}&state={3}&response_mode=form_post&nonce={4}",
        Uri.EscapeDataString(APP_CLIENT_ID),
        Uri.EscapeDataString(scopes),
        Uri.EscapeDataString(redirUri),
        Uri.EscapeDataString("Index;" + scopes + ";" + redirUri),
        Uri.EscapeDataString(Guid.NewGuid().ToString()));

    //return a redirect response
    return new RedirectResult(authorizationRequest);

    Also worth noting is that the response_type in the example above is “code+id_token”; if you don’t include the “openid” permission then Azure will return an error when it redirects back to your application after authentication.

    When you log in you see the standard sort of Azure consent screen that models the permission scopes you asked for:

    v2App_WorkAccountPermsConsent

    Again, worth noting, the consent screen actually will look different based on what kind of account you’ve authenticated with. The image above is from a “Work or School Account”. Here’s what it looks like when you use a “Microsoft Account” (NOTE: it includes a few more permission scopes than the image above):

    v2App_OutlookAccountPermsConsent

    For completeness, since I’m sure most folks today use ADAL to get an access token from a code, here’s an example of how you can take the code that was returned and do a POST yourself to get back a JWT token that includes the access token:

    public ActionResult ProcessCode(string code, string error, string error_description, string resource, string state)
    {
        string viewName = "Index";

        if (!string.IsNullOrEmpty(code))
        {
            string[] stateVals = state.Split(";".ToCharArray(),
                StringSplitOptions.RemoveEmptyEntries);

            viewName = stateVals[0];
            string scopes = stateVals[1];
            string redirUri = stateVals[2];

            //create the collection of values to send to the POST
            List<KeyValuePair<string, string>> vals =
                new List<KeyValuePair<string, string>>();

            vals.Add(new KeyValuePair<string, string>("grant_type", "authorization_code"));
            vals.Add(new KeyValuePair<string, string>("code", code));
            vals.Add(new KeyValuePair<string, string>("client_id", APP_CLIENT_ID));
            vals.Add(new KeyValuePair<string, string>("client_secret", APP_CLIENT_SECRET));
            vals.Add(new KeyValuePair<string, string>("scope", scopes));
            vals.Add(new KeyValuePair<string, string>("redirect_uri", redirUri));

            string loginUrl = "https://login.microsoftonline.com/common/oauth2/v2.0/token";

            //make the request
            HttpClient hc = new HttpClient();

            //form encode the data we're going to POST
            HttpContent content = new FormUrlEncodedContent(vals);

            //plug in the post body
            HttpResponseMessage hrm = hc.PostAsync(loginUrl, content).Result;

            if (hrm.IsSuccessStatusCode)
            {
                //get the raw token
                string rawToken = hrm.Content.ReadAsStringAsync().Result;

                //deserialize it into this custom class I created
                //for working with the token contents
                AzureJsonToken ajt =
                    JsonConvert.DeserializeObject<AzureJsonToken>(rawToken);
            }
            else
            {
                //some error handling here
            }
        }
        else
        {
            //some error handling here
        }

        return View(viewName);
    }

    One note – I’ll explain more about the custom AzureJsonToken class below.

     

    Tip #3 – Use the Correct Service Endpoint

    Similar to the first tip, I saw examples of articles that talked about using the v2 app model, but then pointed to service endpoints that use the v1 app model. So again, to be clear, your single service endpoint for v2 apps is https://outlook.office.com/api/v2.0/me/. If you see code examples that use https://graph.microsoft.com/v1.0/me/, that is the old endpoint. If you get a v2 token and try to use it against a v1 endpoint, then mostly bad things will happen.
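To make this concrete, here is a minimal sketch of composing a request against the v2 endpoint (the /me/messages resource is just one example path on that endpoint, and the token value is a placeholder); sending the same token to a v1-style endpoint is what earns you the failure described above:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

public class V2EndpointSample
{
    public static HttpRequestMessage BuildRequest(string accessToken)
    {
        // note the v2.0 segment – this is the endpoint a v2 token is good for
        var request = new HttpRequestMessage(
            HttpMethod.Get,
            "https://outlook.office.com/api/v2.0/me/messages");

        // the access token from the code-for-token exchange goes in as a Bearer header
        request.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        request.Headers.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        return request;
    }

    public static void Main()
    {
        HttpRequestMessage req = BuildRequest("placeholder-access-token");
        Console.WriteLine(req.RequestUri);

        // to actually send it:
        // HttpResponseMessage resp = new HttpClient().SendAsync(req).Result;
    }
}
```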

     

    Tip #4 – Be Aware that Microsoft Has Pulled Out Basic User Info from Claims

    One of the changes that Microsoft made in v2 of the app model is that they have arbitrarily decided to pull out most meaningful information about the user from the token that is returned to you. They describe this here – https://azure.microsoft.com/en-us/documentation/articles/active-directory-v2-compare/:

    In the original Azure Active Directory service, the most basic OpenID Connect sign-in flow would provide a wealth of information about the user in the resulting id_token. The claims in an id_token can include the user’s name, preferred username, email address, object ID, and more.

    We are now restricting the information that the openid scope affords your app access to. The openid scope will only allow your app to sign the user in, and receive an app-specific identifier for the user. If you want to obtain personally identifiable information (PII) about the user in your app, your app will need to request additional permissions from the user.

    This is sadly classic Microsoft behavior – we know what's good for you better than you know what's good for you – and this horse has already left the barn; you can get this info today using the v1 model. Never one to be deterred by logic however, they are pressing forward with this plan. That means more work on you, the developer. What you'll want to do in order to get basic user info is a couple of things:

    1. Include the “email” and “profile” scopes when you are obtaining your code / token.
    2. Ask for both a code and id_token when you redirect to Azure asking for a code (as shown in the C# example above).
    3. Extract out the id_token results into a useable set of claims after you exchange your code for a token.

    On the last point, the Microsoft documents say that they provide a library for doing this, but did not include any further detail about what it is or how you use it, so that was only marginally helpful. But hey, this is a tip, so here’s a little more. Here’s how you can extract out the contents of the id_token:

1. Add the System.IdentityModel.Tokens.Jwt NuGet package to your application.
    2. Create a new JwtSecurityToken instance from the id_token. Here’s a sample from my code:

    JwtSecurityToken token = new JwtSecurityToken(ajt.IdToken);

3. Use the Payload property of the token to find the individual user attributes that were included. Here’s a quick look at the most useful ones:

token.Payload["name"] //display name
token.Payload["oid"] //object ID
token.Payload["preferred_username"] //upn usually
token.Payload["tid"] //tenant ID

    As noted above though in my description of the converged model, remember that not every account type will have every value. You need to account for circumstances when the Payload contents are different based on the type of account.

     

    Tip #5 – Feel Free to Use the Classes I Created for Deserialization

    I created some classes for deserializing the JSON returned from various service calls into objects that can be used for referring to things like the access token, refresh token, id_token, messages, events, contacts, etc. Feel free to take the examples attached to this posting and use them as needed.

    Microsoft is updating its helper libraries to use Office 365 objects with v2 of the app model so you can use them as well; there is also a pre-release version of ADAL that you can use to work with access tokens, etc. For those of you rolling your own however (or who aren’t ready to fully jump onto the pre-release bandwagon) here are some code snippets from how I used them:

    //to get the access token, refresh token, and id_token
    AzureJsonToken ajt =
    JsonConvert.DeserializeObject<AzureJsonToken>(rawToken);

    This next set is from some REST endpoints I created to test everything out:

    //to get emails
    Emails theMail = JsonConvert.DeserializeObject<Emails>(data);

    //to get events
    Events theEvent = JsonConvert.DeserializeObject<Events>(data);

    //to get contacts
    Contacts theContact = JsonConvert.DeserializeObject<Contacts>(data);

You end up with a nice little dataset on the client this way. Here’s an example of the basic email details that come down to my client-side script:

[Screenshot: the message collection returned to the client]

    Finally, here’s it all pulled together, in a UI that only a Steve could love – using the v2 app model, some custom REST endpoints and a combination of jQuery and Angular to show the results (yes, I really did combine jQuery and Angular, which I know Angular people hate):

Emails

[Screenshot: the emails view]

Events

[Screenshot: the events view]

Contacts

[Screenshot: the contacts view]

    Summary

The new model is out and available for testing now, but it’s not production ready. There’s actually a good amount of documentation out there – hats off to dstrockis at Microsoft, the writer who’s been pushing all the very good content out. There are also a lot of mixed and conflicting documents, though, so try to follow some of the tips included here. The attachment to this posting includes sample classes and code so you can give it a spin yourself, but make sure you start out by at least reading about how to set up your new v2 applications here: https://azure.microsoft.com/en-us/documentation/articles/active-directory-v2-app-registration/. You can also see all the Getting Started tutorials here: https://azure.microsoft.com/en-us/documentation/articles/active-directory-appmodel-v2-overview/#getting-started. Finally, one last article that I recommend reading (short but useful): https://msdn.microsoft.com/en-us/office/office365/howto/authenticate-Office-365-APIs-using-v2.
    You can download the attachment here:



    For this post, we are glad to host Thomas Roettinger, Program Manager in Microsoft’s Private Cloud Solutions team, announcing the availability of artifacts that will help you deploy the Remote Console feature in Microsoft Cloud Platform System (CPS) Standard.


    The Remote Console feature

    First of all, what is the Remote Console feature? Have you ever been in a situation where you misconfigured the network settings and locked yourself out of a virtual machine? Or Remote Desktop is disabled but you need to access the VM console? If you are a cloud administrator, and have access to tools such as Hyper-V Manager or the Virtual Machine Manager (VMM) console, this is not a huge deal. You can just connect to the VM by using the Virtual Machine Connection tool in Hyper-V, or the Connect via Console feature in VMM.

    However, as a tenant in a cloud environment where you don't have direct access or permissions to the infrastructure management tools, you would typically have to open a service ticket to get the issue corrected by your administrator. But guess what? Windows Azure Pack (included with CPS Standard) has a feature that enables a tenant to connect to the Console session and recover from such situations.

    To learn more about the Remote Console feature, please see https://technet.microsoft.com/library/dn469415.aspx.

    Automation

Customers told us that the manual deployment of Remote Console is complex due to the various services that have to be configured and the placement of certificates. Based on that feedback, we have provided a way to help automate the deployment in CPS Standard.

    The automation consists of two parts:

    • A VMM service template. This deploys a highly-available VM with Remote Desktop Gateway server (RD Gateway) and the Authorization Plugin installed.
    • A Service Management Automation (SMA) runbook. This runbook deploys and configures the certificates in RD Gateway, in VMM, and on the Hyper-V hosts.

    Downloading the package

    The download package contains a deployment guide, a VMM service template, and a configuration runbook. You can download the CPS Standard Remote Console artifacts through the Microsoft Web Platform Installer. To locate the artifacts, after you download the Web Platform Installer, do the following:

    a. At the bottom of the Web Platform Installer dialog box, click Options.

    b. In the Custom Feeds box, enter the following URL: http://www.microsoft.com/web/webpi/partners/servicemodels.xml

    c. Click Add feed.

    d. In the search box, type CPS Standard.

    e. Locate the resource, and then click Add.

    f. Click Install. During the installation, note the package location.

    Detailed deployment instructions are included in the download.

    Summary

We hope you enjoy the time savings and the reduction in deployment issues. We look forward to hearing your feedback on our Cloud Platform System solutions.



Summary: Learn how to find published Windows PowerShell modules in the PowerShell Gallery.

    Hey, Scripting Guy! Question How can I use Windows PowerShell to find modules that are published in the Windows PowerShell Gallery?

    Hey, Scripting Guy! Answer Use the Find-Module cmdlet in Windows PowerShell 5.0. This example finds modules related to the ISE:

    Find-Module *ISE*
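A couple of hedged variations on the same cmdlet (a sketch; PowerShellGet must be available, the module name in the second example is just an illustration, and the exact results depend on what is currently in the gallery):

```powershell
# Show only the most interesting columns of the search result
Find-Module -Name *ISE* | Select-Object Name, Version, Author

# Pipe a result straight into installation for the current user;
# -WhatIf previews the action without installing anything
Find-Module -Name ISEScriptingGeek | Install-Module -Scope CurrentUser -WhatIf
```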



     

Active Directory Federation Services (AD FS) has never been as prominent as in the last few years, driven by enterprise adoption of cloud services. Connecting your on-premises Active Directory identity to cloud services can be a challenge. Unlike the NTLM and Kerberos protocols, which were not designed for the Internet, AD FS acts as an identity provider for modern web applications. Software developers are working with frameworks that take identity handling out of the application code and delegate it to a third party. You have probably visited a shopping site that, besides letting you create a username and password, also lets you sign in with your Facebook, Google, or Microsoft Live ID account; those companies run their own identity provider services. Microsoft has three: Microsoft Live ID, Azure AD, and AD FS.

AD FS 3.0 supports three sign-in protocols:

WS-Federation

SAML-P

• Developed by an OASIS technical committee;
• Short for Security Assertion Markup Language;
• A standard for exchanging authentication and authorization information between different security domains;
• Implemented as a protocol based on XML message exchange; covers a wide range of web application scenarios, including SSO and identity propagation;
• Used for integration with third-party applications such as Oracle WebLogic, Lotus Notes, and SAP;
• Token in SAML 2.0 format;
• More information at http://go.microsoft.com/fwlink/?LinkId=193996.

OAuth

In this article I will walk you through the installation step by step and give you tips on the best way to install AD FS 3.0 in your environment.

AD FS 3.0 needs little memory, disk, and CPU; in every AD FS deployment I have worked on, the machines were virtual. Two processors, 4 GB of RAM, and 100 GB of disk are enough. The Windows Internal Database (built into Windows Server) is also a cheap and efficient database choice, and it is the one used in this example.

AD FS is completely dependent on, and sensitive to, digital certificates. You will need three, but only one of them, the Service Communications certificate, must be public, trusted, and valid on the Internet. Do not use certificates issued by internal certification authorities such as AD CS, nor self-signed certificates, for this purpose: besides not being trusted and not appearing in the machines' Trusted Root Certification Authorities list, their certificate revocation lists are rarely available for checking.

DO NOT CUT CORNERS HERE, BUY A CERTIFICATE! In my lab I will use the *.labpfe.com certificate, issued by DigiCert and valid for one year. Import the certificate, with its private key, on the machine. (As you can already tell, wildcard certificates are supported.)

If your Active Directory is at the Windows Server 2012 / R2 functional level, you can use a service account type called gMSA (Group Managed Service Account). Unlike a regular user account used to run a service, with a gMSA you do not set a password explicitly; there is a direct negotiation between the service and the KDC (Key Distribution Center) service of the corresponding domain controller. The main benefit of this model is that the service is not affected when the password expires or is changed. Keep that tip in mind, but check whether your service supports this account type; AD FS does, and we will show you how to configure it.

To enable gMSA in Active Directory you need to run the following command:

Add-KdsRootKey

You may have to wait up to 10 hours after running this command before the service is available on all domain controllers in the forest. Don't rush; wait out this period.

In my lab I ran the command forcing the effective time as if it were 10 hours earlier, which let me continue the environment setup right away. DO NOT DO THIS IN PRODUCTION. Work in phases: run the command one day and do the rest of the setup the next.
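Translated into commands, the production path and the lab-only shortcut above might look like this sketch (the account and DNS names are just examples from my lab):

```powershell
# Production: create the KDS root key, then wait up to 10 hours for replication
Add-KdsRootKey

# Lab ONLY: backdate the key so it becomes usable immediately
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

# Create the gMSA that AD FS will run under (no password is ever typed)
New-ADServiceAccount -Name svcADFS -DNSHostName adfs.labpfe.com
```

Note that the AD FS configuration wizard can also create the gMSA for you, so creating it up front is optional.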

A quick tip: to see the forest functional level, run the command Get-ADForest.

Install the Active Directory Federation Services role.

Wait for the installation to finish, then click the icon to start the configuration wizard. Choose the option Create the first federation server in a federation server farm.

IMPORTANT: There is no longer an option to install in standalone mode (a single server). Even if you have only one machine, creating the deployment in farm mode is mandatory.

You must be a Domain Admin to configure AD FS; being a Local Admin is not enough. By default the wizard suggests the same user who is logged on to the machine.

In the SSL Certificate field choose your certificate, then enter your farm name (in my case adfs.labpfe.com) and the friendly name that will appear on the AD FS forms-based authentication sign-in page.

Now it is time to specify the gMSA service account. Only choose this model if you met the requirements explained at the start of this article (forest functional level, the root key enabled, and the minimum 10-hour wait). Enter a name for the account in the field; note that no password needs to be typed. Otherwise, choose the option “Use an existing domain user account or group MSA” and provide an account and password previously created in AD (this account can be a plain Domain User).

Leave the option Create a database on this server using Windows Internal Database selected, or use an existing SQL Server if you prefer. Remember: if your SQL Server is not clustered and becomes unavailable, all AD FS authentication will be compromised.

Optionally, you can do the entire installation from PowerShell. It is simple, but not very common, because you need to have the certificate thumbprint at hand.
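For reference, the PowerShell-only route might look like the following sketch (the thumbprint is a placeholder, and the names come from my lab):

```powershell
# Install the role plus management tools
Install-WindowsFeature ADFS-Federation -IncludeManagementTools

# Configure the first node of the farm using the imported *.labpfe.com
# certificate; replace the thumbprint with your own
Install-AdfsFarm `
    -CertificateThumbprint "0123456789ABCDEF0123456789ABCDEF01234567" `
    -FederationServiceName "adfs.labpfe.com" `
    -FederationServiceDisplayName "LabPFE Corp" `
    -GroupServiceAccountIdentifier "LABPFE\svcADFS$"
```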

Wait for the setup to finish, then open DNS Manager and create an A record with the farm name you defined during setup, pointing to the server's IP address.

IMPORTANT: DO NOT USE A CNAME (ALIAS) RECORD, AS IT IS NOT SUPPORTED. IF THERE IS A NETWORK LOAD BALANCER IN FRONT, POINT THE RECORD TO THE VIP ADDRESS.

Run a simple ping test and check that the new farm name responds successfully.

TIP: The first time you access AD FS you will inevitably have the following experience: a pop-up opens asking for authentication, and single sign-on does not happen. This is not a problem in AD FS; it is a characteristic of Internet Explorer. Because you are accessing a service with a public name (in this case adfs.labpfe.com), the browser assumes it is on the Internet rather than the intranet. To fix this, open Internet Explorer on the user's machine (or your own), go to Internet Options, the Security tab, select Local intranet, click the Sites button, then Advanced, and add your AD FS address there, starting with https:

Of course, you are now wondering how to do this in bulk for many computers on the network, since all of them will need this setting. I recommend this article, which explains how:

https://blog.thesysadmins.co.uk/group-policy-internet-explorer-security-zones.html

Once that is configured, browse to https://fqdnadfsname/adfs/ls/idpinitiatedsignon.aspx (in my case https://adfs.labpfe.com/adfs/ls/idpinitiatedsignon.aspx) and click Sign In. (Every well-configured AD FS deployment needs to expose this endpoint on the Internet.)

When you click Sign In you should be authenticated directly (that is single sign-on: the Kerberos token from your machine was used). If a pop-up opens, you did not configure the local intranet sites in the step above correctly.

Now a tip about certificates. By default, the certificates generated for Token Signing and Token Decrypting are self-signed and valid for one year. One year is very little time, especially for Token Signing.

Let's set a longer period for these certificates: we will change it to 5 years, or 1825 days, using Set-ADFSProperties.

We can see the result of the change by running the command Get-ADFSProperties.

Then force an immediate (urgent) rollover of the Token Signing and Decrypting certificates with the Update-ADFSCertificate cmdlet.

Opening the AD FS console under Certificates, the certificate expiration date has changed to 2021 instead of the original 2017 (5 years).
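Spelled out, the two changes above map to these cmdlets (a sketch; 1825 days is roughly five years):

```powershell
# Extend the lifetime of newly generated self-signed certificates to 5 years
Set-AdfsProperties -CertificateDuration 1825

# Check the result
Get-AdfsProperties | Select-Object CertificateDuration

# Force an immediate (urgent) rollover of both certificates
Update-AdfsCertificate -CertificateType Token-Signing -Urgent
Update-AdfsCertificate -CertificateType Token-Decrypting -Urgent
```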

Another nice tip is changing the default AD FS illustration, that big blue image to the left of the sign-in form.

Copy an image of your choice with a resolution of 1420 x 1080 pixels and run the Set-AdfsWebTheme command referencing the image; the result looks great. If you want all the commands for customizing the other areas of the sign-in form, read this article: https://technet.microsoft.com/en-us/library/dn280950.aspx
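The logo swap is a one-liner with Set-AdfsWebTheme; a hedged sketch (the image path is an example):

```powershell
# Replace the large illustration shown to the left of the sign-in form;
# the recommended resolution is 1420 x 1080 pixels
Set-AdfsWebTheme -TargetName default -Illustration @{ Path = "C:\Temp\myIllustration.jpg" }
```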

TIP: The AD FS Proxy no longer exists in AD FS 3.0. The service that exposes AD FS to the Internet is now called Web Application Proxy (WAP), and it is a subfeature of Remote Access, no longer of AD FS.

Do not publish AD FS from the local network directly to the Internet. WAP deserves a dedicated article of its own: besides acting as the AD FS proxy, it has another very interesting publishing role that I will explain in a new article. Until then.



    This post was taken from Office Blogs

    Welcome to 2016! You may remember the release of Office 2016 and the new features it brought focused on collaboration. In the new year we will continue to plan, develop and release new features to help you collaborate—especially inside the classroom. Before these new features are released, let’s take some time to look at what’s already available to help you to collaborate in and out of the classroom!

    OneNote Class Notebook—the ultimate collaboration tool

If you have not yet started using OneNote, you are missing out! OneNote can be used anywhere, on any device, with anyone (get it here)! OneNote Class Notebook is an Office 365 wizard that allows teachers to quickly set up the ideal OneNote collaborative environment for the classroom, including public and private spaces.

    Check out what some teachers have to say about OneNote:

    Get started collaborating in the classroom with OneNote Class Notebook Creator!

[Video: teachers talk about OneNote Class Notebook]

    Sway—the best digital storytelling app for interactive class materials, presentations, projects and more

    Sway is a new digital storytelling app from Office that is great for project or problem-based learning. Teachers can create interactive web-based lessons, assignments, project recaps, newsletters, and more—right from a phone, tablet, or browser. Students can collaborate and use Sway to create engaging reports, assignments, projects, study materials, and portfolios. Sways are easy to share with the class or the world and look great on any screen.​​​ Sway is currently available on the web, on Windows 10 PCs/tablets, as well as on iPhones and iPads. Click here to learn more about Sway for education.

[Video: Sway for education]

    Office Online—all the tools you need on the web

    Did you know that Word, Excel and PowerPoint are available on the web? So are OneNote, Sway, Outlook Mail, Contacts and Calendar! With Office Online, students can work in the same document and see changes happening in real-time by other students.

    • Have a sign up form for dates to bring snacks? Send a link to students so they can all open it up and add when they want to bring their snack.
    • Want students to work as partners on an essay about Grace Hopper and her contributions to Computer Science? Students can create, share and edit a document in real-time and see the edits their partners are making as they make them.

    Office Online saves as you go. Everyone on your team can see changes to text and formatting as they happen, so you can all stay on the same page as your work evolves. There’s even Skype chat integration to have discussions about your document as you work. If you haven’t tried it yet, go to office.com to get started!

    OneDrive file storage in the cloud—automatically save, store, share and collaborate

OneDrive is cloud storage where you can save documents on the web and access them from any device. OneDrive makes it easy to use Office Online: Word, PowerPoint, Excel, or OneNote on the web! Best of all, if you’re not in the document when your peers make edits, you’ll get an email notifying you of the changes. Every change is synced in the document, automatically saved on OneDrive, and made visible to the team, so you can catch up on what you’ve missed.

    Skype in the classroom—guest speakers, virtual field trips or classrooms from around the globe

    Skype in the classroom is a free global teaching community. Play Mystery Skype and partner with a classroom from around the globe teaching students about differences and similarities with other cultures. Connect with a guest speaker and video stream them into your classroom to inspire students and teach them about new careers. Go on a virtual field trip to South Africa to learn about the endangered African penguins or head over to the lab to learn about plankton and why they are so important. Stuck on developing curriculum ideas? Check out this long list of live Skype lessons.

    Sound awesome? Register to take part here!

    Sound like cool tools you would use in your classroom? Find more information about this here.

    —Joanie Weaver, program manager for the Office Core team



Hello everyone, this is Yurika Muraki.

Today we released EMET 5.5 (final), a new version of the Enhanced Mitigation Experience Toolkit. (The beta version published in October 2015 has now become the final release.)

You can download the tool here.

The Japanese user guide is still in preparation; please wait a little longer.

The main changes in version 5.5 are as follows:

Windows 10 support

EMET 5.5 (including the beta) is the only version of EMET that officially supports Windows 10. Note that the previous version, EMET 5.2, does not support it.

Also note that Windows 10 already includes mitigations that are equivalent to or better than EMET's, such as Device Guard, Control Flow Guard, and AppLocker.

EMET mitigations cannot be applied to Edge, because Edge already includes the required attack mitigations, such as sandboxing.

Support for the new Windows 10 untrusted font blocking mitigation

Windows 10 can protect users from attacks that originate in untrusted fonts (any font installed outside the %windir%\Fonts directory), such as local elevation of privilege during font parsing. As with its other mitigations, EMET lets you apply this protection system-wide or per application. Reference: "Block untrusted fonts in an enterprise"

More convenient Group Policy configuration

When you configure EMET settings through Group Policy, they are now written to dedicated registry keys.

When configured via Group Policy: HKLM\SOFTWARE\Policies\Microsoft\EMET and HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

When configured outside Group Policy: HKLM\SOFTWARE\Microsoft\EMET and HKCU\SOFTWARE\Microsoft\EMET

* On 64-bit systems these are under the Wow6432Node key.

     

     


     


     

     

     



A few weeks ago we provided a Step-By-Step detailing how to configure Azure RemoteApp for QuickBooks. In this Step-By-Step we will take this one step further and add in a domain that is hosted in Azure. For those of you unaware of Azure RemoteApp, it is Microsoft RemoteApp backed by Remote Desktop Services in Azure. This service provides secure remote access to applications from different devices; users can access their applications on their devices while you manage the data and access...(read more)


      

While three finalists compete on 11 February in the disciplines 100-meter spreadsheet, chart ballet, and formula wrestling at the Danish Excel Championship, you can, in less stressful and sweaty surroundings, hear about Excel in education, pick up smart Excel tips, or catch talks by, among others, Microsoft Office designer Jakob Nielsen.

As part of the finals event we have a limited number of seats for the final, allocated on a first come, first served basis. Register here.

While the final itself is shown via Skype Broadcast, you can move between areas, each with its own theme. In one of the areas you can help us make Excel better by telling us what is good and bad about Excel, and if you have new ideas or suggestions for improvement, we would love to hear those too. The other areas focus on education, Office, and products. You can also learn Excel tricks from 4D or visualize data with Power BI.

The event costs nothing but your time and interest, and takes place Thursday 11 February from 15:00 to 18:00 at Microsoft, Kanalvej 7, 2800 Kongens Lyngby.

 

Agenda

14:45  Doors open for registration

15:15  Welcome, by CEO Marianne Steensen, Microsoft Denmark

15:20  Keynote, by Excel designer Jakob Nielsen, Microsoft HQ Redmond

15:30-15:40  Presentation of the finalists and 4D, by Anne-Marie Gjellerup, Microsoft Denmark

16:00  The final takes place in a closed room but is live-streamed; attendees can visit the various inspiration zones and enjoy refreshments

16:45  The final ends and the judges deliberate

17:00  Data visualization, by Thomas Black-Petersen, Inspari

17:15  The Danish Excel champion is announced and celebrated

17:30  Networking & thanks for this year!



Especially with the Microsoft Cloud in Germany becoming available soon, the question arises of how to move storage or virtual machines within Azure. Let's take a closer look at this; it is not entirely straightforward. Strictly speaking, there is no moving at all: we can only copy and then delete. To be honest, I prefer it that way anyway; it always leaves a fallback...

The most important thing first: moving a VM is ultimately not much different from moving storage; there are just a few extra points to watch out for. And so that nobody is disappointed at the end: we concentrate here on moving the VM, not on carrying over every detail of its configuration. I will give a few hints, but some rework may well be needed, especially with complex network structures.

Did I mention that we will do all of this in PowerShell? No? Well, now I have. So that we don't have to work with placeholders in the commands, let's agree on the following scenario (variables always begin with $source for source values and $dest for destination values):

Source: $sourceName (name of the old VM), $sourceService (name of the existing cloud service)

Destination: $destName (name of the target VM), $destService (name of the new cloud service)

The complete script can also be copied from GitHub.

The source

We start by gathering all the information about our VM:

$sourceVM = Get-AzureVM -Name $sourceName -ServiceName $sourceService

Just to be safe, we save the complete configuration locally to an XML file:

$sourceVM | Export-AzureVM -Path "C:\temp\export.xml"

Time to collect information about the disks. Azure distinguishes between OS disks and data disks; naturally, we fetch both:

$sourceOSDisk = $sourceVM.VM.OSVirtualHardDisk

$sourceDataDisks = $sourceVM.VM.DataVirtualHardDisks

...and we also set up the storage context right away. If you are not sure what a storage context is: it is an object that bundles together the storage information, that is, the name, the account, and the access key:

$sourceStorageName = ($sourceOSDisk.MediaLink.Host -split "\.")[0]

$sourceStorageAccount = Get-AzureStorageAccount -StorageAccountName $sourceStorageName

$sourceStorageKey = (Get-AzureStorageKey -StorageAccountName $sourceStorageName).Primary

$sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageName -StorageAccountKey $sourceStorageKey

The way we obtained the storage name (first line) looks a bit odd, but it is hidden in the MediaLink, as the first part of the host name. So we effectively split the host name at the dots and take the first part (array numbering starts at zero!). If anyone knows a nicer way, let me know; I think this shows off the power of PowerShell nicely...

We are done on the source side. If you look at the saved configuration file, you will find all the important details for our new environment there, in particular the size and any endpoints. If you would rather not look at the file, you can also get this information like so:

$sourceVM.VM.RoleSize

$sourceVM | Get-AzureEndpoint

Right. Then let's shut the VM down:

Stop-AzureVM -Name $sourceName -ServiceName $sourceService

Remember that, if this is the last VM in the cloud service, we lose its IP address. We can prevent that by appending -StayProvisioned, but then the VM continues to incur charges...

The destination

Let's switch to the other side, into our second subscription:

Select-AzureSubscription -SubscriptionName "Zielsubscription"

Set-AzureSubscription -SubscriptionName "Zielsubscription" -CurrentStorageAccountName "zielstorage"

Note: the Azure environment is tied to the subscription, so it is important to define things like the storage context before switching subscriptions, because a different subscription may belong to a different Azure environment and thus have different endpoints. This becomes important for the Microsoft Cloud in Germany, where that is exactly the case...

We have switched, so everything is ready; off we go again with the storage context, this time for the destination:

$destStorageName = "zielstorage"

$destStorageAccount = Get-AzureStorageAccount -StorageAccountName $destStorageName

$destStorageKey = (Get-AzureStorageKey -StorageAccountName $destStorageName).Primary

$destContext = New-AzureStorageContext -StorageAccountName $destStorageName -StorageAccountKey $destStorageKey

In what follows we assume that we store into the storage container "vhds" and that it already exists. If not, please create it with:

New-AzureStorageContainer -Context $destContext -Name vhds

The copy

Now we simply walk through all the disks we have and copy each of them from old to new. The command for this is Start-CopyAzureStorageBlob, and it needs

• the old and the new container,
• the old and the new blob name, and
• the old and the new storage context.

As the name suggests, the command only starts the copy operation; if we keep the returned object, we can check the current status of the job via Get-AzureStorageBlobCopyState. By the way: no, I don't know why the order of the Copy-Azure-Storage-Blob parts differs between these cmdlet names.

The following code block walks through all the disks, starts the copy, and polls the status every 10 seconds:

$allDisks = @($sourceOSDisk) + $sourceDataDisks

$destDataDisks = @()

foreach ($disk in $allDisks)

{

    $sourceContName = ($disk.MediaLink.Segments[1] -split "\/")[0]

    $sourceBlobName = $disk.MediaLink.Segments[2]

    $destBlobName = $sourceBlobName

    $destBlob = Start-CopyAzureStorageBlob -SrcContainer $sourceContName -SrcBlob $sourceBlobName -DestContainer vhds -DestBlob $destBlobName -Context $sourceContext -DestContext $destContext -Force

    Write-Host "Copying blob $sourceBlobName"

    $copyState = $destBlob | Get-AzureStorageBlobCopyState

    while ($copyState.Status -ne "Success")

    {

        Write-Host "$($copyState.BytesCopied) of $($copyState.TotalBytes)"

        sleep -Seconds 10

        $copyState = $destBlob | Get-AzureStorageBlobCopyState

    }

    if ($disk -eq $sourceOSDisk)

    {

        $destOSDisk = $destBlob

    }

    else

    {

        $destDataDisks += $destBlob

    }

}

    Die Nacharbeit

    Almost done. All the disks are now copied into our new storage account. So far, though, they are just files in the destination, not disks. We fix that with Add-AzureDisk, once for the OS disk and then once more for each data disk. How we derive the names again looks a bit wild; at this point a big thank-you to Devon Musgrave, from whom these examples originate!

    Add-AzureDisk -OS $sourceOSDisk.OS -DiskName $sourceOSDisk.DiskName `
              -MediaLocation $destOSDisk.ICloudBlob.Uri

    foreach($currentDataDisk in $destDataDisks)

    {

        $diskName = ($sourceDataDisks |
              ? {$_.MediaLink.Segments[2] -eq $currentDataDisk.Name}).DiskName

        Add-AzureDisk -DiskName $diskName `
              -MediaLocation $currentDataDisk.ICloudBlob.Uri

    }

    And just like that we have actual disks instead of mere blobs. The rest is simple, but as threatened above it may require some rework. That is not my problem here, though: I simply create a new VM with some InstanceSize, except that instead of an ImageName I pass a DiskName, namely our OS disk copied above (don't be surprised: we kept the disk names unchanged):

    $destVM = New-AzureVMConfig -Name $destName -InstanceSize Small `
              -DiskName $sourceOSDisk.DiskName

    New-AzureVM -ServiceName $destService -VMs $destVM

    Now would be the moment for the rework (endpoints, networks, and so on), and then the VM only needs to be started. By the way, the admin user and password are part of the VM itself, not of the Azure configuration, so they are carried over directly.
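    As a rough sketch of what that rework might look like, re-adding an RDP endpoint and then booting the VM could be done as below. This is only an illustration: the endpoint name and ports are placeholders and must be adapted to whatever the old VM actually had configured.

    # Sketch only: endpoint name and ports are placeholders.
    Get-AzureVM -ServiceName $destService -Name $destName |
        Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 3389 |
        Update-AzureVM

    # Finally, start the freshly assembled VM.
    Start-AzureVM -ServiceName $destService -Name $destName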

    The Summary

    The main part of the work obviously lies in gathering all the information (storage name, disk name, and so on), but once we have everything together, the copy itself is a single command. We should not forget to delete the old disks afterwards; nobody needs them anymore. It should now be an easy exercise to adapt the example so that it can also move arbitrary storage blobs: simply leave out the conversion to disks and, logically, the starting and stopping of the VMs.
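    Cleaning up the old disks could, for example, look like the sketch below. Note the assumptions: the source disks live in a different subscription than the new ones (otherwise deleting by DiskName would hit the freshly registered copies), $sourceSubscription is a placeholder for that subscription's name, and -DeleteVHD removes the underlying VHD blob as well, so only run this once the new VM is verified to work.

    # Run this in the *source* subscription, and only after the new VM works.
    # $sourceSubscription is a placeholder for your source subscription name.
    Select-AzureSubscription -SubscriptionName $sourceSubscription

    # Caution: -DeleteVHD deletes the underlying VHD blob as well.
    foreach ($disk in @($sourceOSDisk) + $sourceDataDisks)
    {
        Remove-AzureDisk -DiskName $disk.DiskName -DeleteVHD
    }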

    One more word about the Microsoft Cloud in Germany: surprising to some, but this mechanism works without any change for copying between the public Azure cloud and the Microsoft Cloud in Germany as well. It may take somewhat longer, but then again, you don't do this every day. The only important thing, as mentioned above, is to create the StorageContext etc. in the respective subscription. If you don't, Azure may, for example, append the wrong suffixes when building a URL (MediaLink). The differences become clearer once you run Get-AzureEnvironment…
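    To see those differing suffixes yourself, comparing the two environments might look like this. A sketch only: the environment names ("AzureCloud", "AzureGermanCloud") and the StorageEndpointSuffix property are as registered by the classic Azure module and may vary with the module version installed.

    # Compare the storage endpoint suffixes of public Azure and the German cloud,
    # e.g. core.windows.net versus core.cloudapi.de.
    Get-AzureEnvironment |
        Where-Object { $_.Name -in "AzureCloud", "AzureGermanCloud" } |
        Select-Object Name, StorageEndpointSuffix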

    Happy copying!
