
As an Escalation Engineer for Exchange Online, I get lots of questions about how to stop email spoofing. It is a very broad topic, and there are a number of things that can be done. Below is some general guidance I provide to my customers when this topic comes up.

    Last updated January 6th 2016

    =================================================================

Combating email spoofing can be tricky; what is right for another organization may not necessarily be right for yours. Moreover, it's always important to understand that you will never be able to block 100% of spoof attacks 100% of the time.

     

When developing the strategy that is best for you, we recommend looking at these four areas:

     

    SPF/DKIM/DMARC

The link below provides guidance on using DMARC in Office 365:

http://blogs.msdn.com/b/tzink/archive/2014/12/03/using-dmarc-in-office-365.aspx

DKIM outbound signing is now enabled for your default onmicrosoft.com domain. To enable it for vanity domains whose DNS you manage, you must add the two CNAME records as outlined in the article below.
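For illustration only (contoso.com and the contoso tenant name are placeholders; take the exact host names from the article linked below), the two CNAME records generally take this shape:

selector1._domainkey.contoso.com  CNAME  selector1-contoso-com._domainkey.contoso.onmicrosoft.com
selector2._domainkey.contoso.com  CNAME  selector2-contoso-com._domainkey.contoso.onmicrosoft.com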

    External DNS records required for SPF

       https://support.office.com/en-us/article/External-Domain-Name-System-records-for-Office-365-c0531a6f-9e25-4f2d-ad0e-a70bfef09ac0#BKMK_SPFrecords

     

    Customize an SPF record to validate outbound email sent from your domain

    https://technet.microsoft.com/en-us/library/dn789058(v=exchg.150).aspx
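For reference, a domain that sends all of its mail through Office 365 typically publishes a TXT record like the following (contoso.com is a placeholder; list any other legitimate sending systems before the -all):

v=spf1 include:spf.protection.outlook.com -all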

     

     

    User Education

Even with the most restrictive settings, it is important to educate your user community to spot the red flags of spoofing attempts. If, for whatever reason, a user gets an email from itsupport@cont0so.com, they should be able to identify that it does not look like legitimate email from your IT support staff.

     

    Connection/SPAM Filters/Transport Rules

The links below provide in-depth guidance on configuring your spam filters, along with advanced features that can help fine-tune them to your specific needs.

     

    Configure the connection filter policy

** You can add IP addresses here to bypass filtering for email from trusted sources if, and only if, those sources are already scanning/filtering mail before sending it on.

    https://technet.microsoft.com/en-us/library/jj200718(v=exchg.150).aspx

     

    Configure your spam filter policies

    https://technet.microsoft.com/en-us/library/jj200684(v=exchg.150).aspx 

     

    Advanced Spam Filtering Options

** Proceed with caution when setting some of these features; they can be very restrictive and generate a lot of false positives, especially the option to quarantine SPF hard fails.

    https://technet.microsoft.com/en-us/library/jj200750(v=exchg.150).aspx  

     

    (Not) Using the Additional Spam Filtering option for SPF hard fail to block apparently internal email spoofing

    http://blogs.msdn.com/b/tzink/archive/2015/07/21/not-using-the-additional-spam-filtering-option-for-spf-hard-fail-to-block-apparently-internal-email-spoofing.aspx
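As one hedged example of the transport-rule approach, the rule below (built with the Exchange Online New-TransportRule cmdlet; the domain and SCL value are placeholders) raises the spam score of external mail that claims to come from your own domain:

# Flag mail from outside the organization that uses our own sending domain
New-TransportRule -Name "Flag external mail spoofing our domain" `
    -FromScope NotInOrganization -SenderDomainIs contoso.com -SetSCL 6

Test a rule like this carefully; as the post linked above explains, internal-looking mail can legitimately arrive from outside (newsletters, scan-to-email devices, and so on).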

     

     

    Contingency/Action plans

As stated earlier, you will never be able to block 100% of malicious email 100% of the time. For when malicious/spoofed email does get through, develop an action plan including, but not limited to:

     

•        Resetting the password on any compromised accounts

•        Running malware/virus scans on affected machines

•        Using the Search-Mailbox cmdlet to seek out and delete identified malicious email (see the sketch after this list) - https://technet.microsoft.com/en-us/library/dd298173(v=exchg.150).aspx

•        Using transport rules to help suppress the subsequent delivery of identified messages.

•        Using transport rules to block executable content: http://blogs.msdn.com/b/tzink/archive/2014/04/08/blocking-executable-content-in-office-365-for-more-aggressive-anti-malware-protection.aspx

•        Submitting sample messages to Microsoft for analysis: https://technet.microsoft.com/en-us/library/jj200769.aspx

•        Submitting suspected malware to our protection center: https://www.microsoft.com/security/portal/submission/submit.aspx
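A minimal sketch of the Search-Mailbox step referenced above, assuming a connected Exchange Online PowerShell session and a hypothetical subject line:

# Dry run first: log what would match without deleting anything
Get-Mailbox -ResultSize Unlimited |
    Search-Mailbox -SearchQuery 'Subject:"Invoice overdue"' -LogOnly -LogLevel Full
# Then remove the identified message for real
Get-Mailbox -ResultSize Unlimited |
    Search-Mailbox -SearchQuery 'Subject:"Invoice overdue"' -DeleteContent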

     

     

    Some additional related links:

     

    Anti-spam and anti-malware protection

    https://technet.microsoft.com/en-us/library/jj200731(v=exchg.150).aspx

     

    Best practices for configuring EOP

    https://technet.microsoft.com/en-us/library/jj723164(v=exchg.150).aspx

     

Terry Zink: Security Talk. Terry is one of our program managers for EOP.

    http://blogs.msdn.com/b/tzink/



    Storing JSON Data in SQL Server

Since external systems format information as JSON text, JSON is also stored in SQL Server as text. You can use standard NVARCHAR columns to store JSON data. A simple table where some information is stored as JSON is shown in the following example:

CREATE TABLE Person (
 Id int IDENTITY PRIMARY KEY NONCLUSTERED,
 FirstName nvarchar(100) NOT NULL,
 LastName nvarchar(100) NOT NULL,
 InfoJson nvarchar(max)
) WITH (MEMORY_OPTIMIZED = ON)

    Here you can see the main difference between traditional relational-only and document-only systems and the hybrid model that SQL Server 2016 provides. In SQL Server 2016, you can combine both scalar columns (FirstName and LastName in this example) and columns that contain JSON (InfoJSON in this example).

    In SQL Server, you can organize your data as one or many scalar or JSON columns and combine them with Spatial or XML columns if needed. In the same table, you can combine standard relational columns that enable faster access and JSON columns that provide flexibility and rapid application development. Unlike relational-only or document-only storage where you need to choose between two principles of data modeling, SQL Server offers you a hybrid data storage model where you can use the best of both data modeling methods.

Although JSON is stored in a text column, it is not just plain text. SQL Server has built-in support for optimizing the storage of text columns using various compression mechanisms, such as UNICODE compression, which can provide up to a 50% compression ratio. You can also store JSON text in columnstore tables or compress it explicitly using the built-in COMPRESS function, which uses the GZip algorithm.

JSON is fully compatible with any SQL Server component or technology that works with NVARCHAR data. In the example above, JSON is stored in an In-Memory OLTP (Hekaton) table that provides extreme processing performance. You can store JSON in standard tables, columnstore indexes, or in FILESTREAM. You can also load it from Hadoop using PolyBase external tables, read it from file systems, stretch it to Azure SQL, use any replication method, and more. If you combine tables that store JSON documents with other SQL Server features, such as Temporal or Row-Level Security, you might find some powerful capabilities that are not available in the existing document databases.

    If you don’t want to keep JSON as a free text format, you can add a validation that verifies that JSON in the text column is properly formatted using standard CHECK constraints and ISJSON function:

ALTER TABLE Person
ADD CONSTRAINT [Content should be formatted as JSON]
 CHECK (ISJSON(InfoJSON) > 0)

    This is a standard SQL Server check constraint that enables you to validate whether the text stored in the JSON column is properly formatted. This constraint is optional – you can leave a plain text column as in the previous example; however, your queries might fail at runtime if your JSON text in a row is not properly formatted or if you don’t add the ISJSON condition in the WHERE clause to exclude invalid JSON columns.

    Since JSON is represented as text, you don’t need to make any changes in your client applications, wait for new drivers, or change protocol. You can read or write JSON documents in your C#, Java, and Node.js applications as standard string values. JSON can be loaded in ORM models as string fields and be directly sent to JavaScript client-side code via Ajax requests. Any ETL tool can also load or read JSON because there is no new format or interface.

    Built-in functions for JSON processing

    SQL Server 2016 provides functions for parsing and processing JSON text. JSON built-in functions that are available in SQL Server 2016 are:

    • ISJSON( jsonText ) checks if the NVARCHAR text is properly formatted according to the JSON specification. You can use this function to create check constraints on NVARCHAR columns that contain JSON text
    • JSON_VALUE( jsonText, path ) parses jsonText and extracts scalar values on the specified JavaScript-like path (see below for some JSON path examples)
• JSON_QUERY( jsonText, path ) parses jsonText and extracts objects or arrays on the specified JavaScript-like path (see below for some JSON path examples)

    These functions use JSON paths for referencing values or objects in JSON text. JSON paths use JavaScript-like syntax for referencing properties in JSON text. Some examples are:

• '$' – references the entire JSON object in the input text
• '$.property1' – references property1 in the JSON object
• '$[4]' – references the fifth element in a JSON array (indexes are counted from 0, as in JavaScript)
• '$.property1.property2.array1[5].property3.array2[15].property4' – references a complex nested property in the JSON object
• '$.info."first name"' – references the "first name" property in the info object. If a key contains special characters such as spaces or dollar signs, it should be surrounded with double quotes

The dollar sign ($) represents the input JSON object (similar to the root "/" in the XPath language). You can add any JavaScript-like property or array path after "$" to reference properties in the JSON object. One simple example of a query that uses these built-in functions is:

SELECT Id, FirstName, LastName,
     JSON_VALUE(InfoJSON, '$.info."social security number"') AS SSN,
     JSON_QUERY(InfoJSON, '$.skills') AS Skills
FROM Person AS t
WHERE ISJSON(InfoJSON) > 0
AND JSON_VALUE(InfoJSON, '$.Type') = 'Student'

This query returns first name and last name information from standard table columns, plus social security numbers and an array of skills from the JSON column. Results are returned from rows where the InfoJSON cell contains valid JSON and the Type value in the JSON column is 'Student'. As you may notice, you can use JSON values in any part of the query, such as ORDER BY, GROUP BY, etc.

    Check out the other posts in this four-part series in the links below (as they become available), or learn more in the SQL Server 2016 blogging series.

    JSON in SQL Server 2016: Part 1 of 4



    Here's a quick look at our “Top 10” most popular posts in 2015, based on audience views. During the year, the focus of this blog expanded from exclusively ML to broader data science and advanced analytics. Not surprisingly, major new product announcements figure prominently on this list, but a few other entries may perhaps be a bit more unexpected.

    With three posts tied for the 10th position, it's more like a Top 12 list - we hope you enjoy this recap:   

    10.
    What Types of Questions Can Data Science Answer, by Brandon Rohrer.
    Announcing the Availability of the Microsoft Data Science Virtual Machine, by Gopi Kumar.
    Announcing the Public Preview of Azure Data Catalog, by Joseph Sirosh.

    9.
    Build and Deploy a Predictive Web App Using Python and Azure ML, by Raymond Laghaeian.

    8.
    Choosing a Learning Algorithm in Azure ML, by Brandon Rohrer.

    7.
    Excel Add-in for Azure ML, by Ted Way.

    6.
    Introducing Jupyter Notebooks in Azure ML Studio, by Shahrokh Mortazavi.

    5.
    New edX Course: Data Science & Machine Learning Essentials, by Chirag Dhull.

    4.
    Microsoft Closes Acquisition of Revolution Analytics, by Joseph Sirosh.

    3.
    Announcing the General Availability of Azure Machine Learning, by Joseph Sirosh.

    2.
    Announcing the General Availability of Azure Data Factory, by Joseph Sirosh.

    1.
    Fun with ML, Stream Analytics and PowerBI – Observing Virality in Real Time, by Corom Thompson and Santosh Balasubramanian.


     

    ML Blog Team



Work Folders syncs files between client and server. Although most issues are discovered by users, the root cause may be on the server, the client, or the network. This blog post shares the most common problems customers have reported, and some troubleshooting techniques for Windows devices.

    Setup

When a user sets up Work Folders using the Control Panel app, any issues encountered will be shown in the UI. Some common issues are:

• Work Folders path cannot be encrypted: If the admin requires files to be encrypted on the client, Work Folders will try to encrypt the folder it creates. If encryption fails, the user will see the failure and be asked to use a different path. A few examples:

  • If a handle to the folder is open, encryption will fail.

  • If the folder is on a USB drive and the drive does not support encryption.

  • If there is an existing Work Folders folder that is encrypted with other keys.

  • If the device is domain joined, also search for (and fix) expired or revoked certificates in the "Default Domain Policy", which can prevent encryption on the client.

• Password enforcement failure: The password policy is also an admin configuration on the server, enforced on the client. The user must be an admin on the client machine for the policy to be enforced.

  • However, it is not common for users to have local admin rights on domain-managed machines. To exempt domain devices from the password policy, the admin must configure the domains to be excluded by using the Set-SyncShare cmdlet and specifying the PasswordAutolockExcludeDomain list. For example:

     Set-SyncShare <share name> -PasswordAutolockExcludeDomain <domain list>

  • Password enforcement is done using the EAS engine in Windows. It requires that the user can change the password on the device. In Windows 10, the EAS engine has changed so that all users (including local user accounts) on the device can change their password. You can find more details here (note that the Mail app also uses the EAS engine to enforce the password policy).

    • Access Denied:

  • Mirrored account: This usually happens in testing, when the device is connected to the corpnet and logged on with a local account, and there is a domain account with the same user name as the local account. Windows may try to use NTLM to authenticate and not prompt for domain user credentials (note: if you are logged on with a device local account, you should be prompted for domain credentials). In this case, setup will fail.

  • Windows 10 specific: This issue existed in some pre-release builds of Windows 10 and TH1; it is fixed in the Windows 10 TH2 release. In some setups, the following registry key is missing: HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\SyncRootManager. There is no good workaround for this; we recommend getting the TH2 build, which contains the fix.

    Sync

Both the encryption and password enforcement errors described above can happen during sync if the admin turns on the policy after the user has already set up Work Folders. On each sync, the client checks for policy changes and applies them if necessary.

For client errors, it's always good to start with the message displayed in the Control Panel. The list below describes some common errors shown on the client:

• Requires credentials: This is more common if the admin has configured AD FS for authentication. How often the user must re-enter credentials is defined by the AD FS token lifetime, which is configured in AD FS. On Windows 8.1 or below, if the device is Workplace Joined, the token lifetime is 7 days by default. On Windows 10, it extends to 42 days. For non-Workplace Joined devices, the token lifetime is 8 hours.

• Key revoked: This happens when the encryption key has been revoked by either the admin or the user. Multiple actions can trigger key revocation:

      • Admin chooses to wipe a device.

      • User removes the device from Intune management (or other MDM app if it is supported for key revocation)

      • User removes corporate email account on the device.

  • Work Folders is configured on an external drive, and the drive is connected to a different machine. The encryption key is tied to a device; when the folder is configured on one device, you can't simply move it to another device and read it.

  • PC refresh: If the device is clean installed, the encryption key is deleted, and as a result the data cannot be decrypted.

• Conflict files: When the same file is modified on different devices, a conflict file is generated at the next sync. Work Folders determines the winning file by the last-write timestamp. The winning file keeps the file name; the losing file is renamed by appending a device name to the file name, which indicates where the conflict was created. Some known examples:

  • If a user changes a file on one device without closing it, the file will not sync to the server; the user then changes the file on another device. When both files are closed and synced, there will be a conflict.

  • IE favorites: IE changes the favorite links periodically; although there is no content change, sync detects the change and creates a conflict. In Windows 10, Work Folders optimizes this by comparing content: if the file is truly identical, it does not generate a conflict.

  • Server data restore: If the server loses the sync metadata database, the client and server need to compare their file sets to determine what to sync. During this reconciliation process, any differences found between the client and the server will generate conflicts.

• File types excluded from sync: Work Folders tries to optimize sync by excluding temp files and a few files specific to the device itself. The files excluded from sync are thumbs.db, desktop.ini, and temp files (most temp files seen by Work Folders come from Office applications).

       

    Client upgrade

When upgrading from Windows 7 to Windows 10, ensure the Windows 7 client has KB 3081954 installed; otherwise, the device will lose its sync partnership with the server after the upgrade. The user will not be notified of any errors (since Work Folders will be shown as not installed on the device). If the user didn't have the KB installed before the upgrade, he/she will need to reconfigure Work Folders afterward.

For a Windows 8.1 to Windows 10 upgrade done using USMT, the Work Folders link in File Explorer may not work after the upgrade. To fix this, the user simply needs to open Control Panel -> Work Folders; this action triggers the service to reload the partnership and fixes the Work Folders link in File Explorer.

    Event logs

Work Folders event logs are stored under Applications and Services -> Microsoft -> Windows -> Work Folders. The logs under the Operational folder should be examined. The ManagementAgent logs are used for the notification center and can be ignored.
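A quick way to pull recent errors from that channel with PowerShell (a sketch; the channel name below is assumed from the folder path above):

# List recent error events from the Work Folders operational log
Get-WinEvent -LogName 'Microsoft-Windows-WorkFolders/Operational' -MaxEvents 200 |
    Where-Object { $_.Level -eq 2 } |    # Level 2 = Error
    Select-Object TimeCreated, Id, Message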

    Traces

If your problem is not covered by any of the above or by the resources below, you will need to contact Microsoft CSS, who can guide you through capturing debug traces for further investigation.

    Resources

The TechNet wiki is also updated periodically as issues are reported:

    http://social.technet.microsoft.com/wiki/contents/articles/tags/Work+Folders/default.aspx

If you want to learn more about Work Folders, I'd recommend this list of blogs:

    http://blogs.technet.com/b/filecab/archive/tags/work+folders/default.aspx

There are also good TechNet articles on Work Folders here:

    https://technet.microsoft.com/en-us/library/dn265974.aspx



    Summary: Easily identify which files have been saved in the PowerShell ISE.

Hey, Scripting Guy! Question How can I see which of my open files in the Windows PowerShell ISE have been saved?

    Hey, Scripting Guy! Answer To show a list of files that have been saved, run the following code in the ISE.

    $psISE.PowerShellTabs.files | where { $_.IsSaved } | Select-Object DisplayName
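Conversely, to list the open files that still have unsaved changes, negate the filter:

$psISE.PowerShellTabs.Files | where { -not $_.IsSaved } | Select-Object DisplayName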

     



    Does your organization use a web proxy server for securing outbound Internet access?

    Do you use third-party tools, such as Fiddler, that rely on HTTP/HTTPS proxying?

If you've answered yes! to either of these questions, you'll likely need to make a couple of quick tweaks to your Azure PowerShell scripts for successful connectivity when communicating through a proxied connection.

    Read this article ...

In this article, I'll provide 4 simple lines of code that you can add to the beginning of any Azure PowerShell script when you need to work with a proxy-based connection to the Microsoft Azure cloud ...
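The article's exact four lines are not reproduced in this excerpt; a common pattern (a sketch using standard .NET types, not necessarily the author's code) is to point .NET at the system proxy and hand it your default credentials:

# Route .NET (and therefore Azure PowerShell) web requests through the system proxy
$proxy = [System.Net.WebRequest]::GetSystemWebProxy()
$proxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials
[System.Net.WebRequest]::DefaultWebProxy = $proxy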

    ...(read more)


    In this podcast, we look at the important topic of how/why Windows 10 is changing (for the better!) the way we handle deployment and management in the enterprise.

    This is a topic I covered in depth in yesterday’s blog post, and this discussion expands on that overview.

    To dive into the entire "Windows 10 + EMS & ConfigMgr" series, visit aka.ms/DeployWin10.

     

As noted yesterday, most people I talk with generally agree that your devices will be more secure, more reliable, and more compatible if you keep up with the updates we regularly release. Even though they agree with this in principle, they still have concerns about whether their devices can handle all the updates without first rigorously verifying that the updates won't break something. That process can, obviously, consume a ton of time. Examples of devices in this type of scenario are PCs that operate in truly mission-critical roles (e.g., operating an assembly line or in an operating room). These mission-critical use cases are very different from the typical Information Worker scenarios, where devices are used for a lot of different activities and can therefore be more flexible.

In our mobile-first, cloud-first world, Information Workers expect (and, you could argue, insist on) having new value and new capabilities constantly flowing to them. Most of these workers have smartphones and regularly accept updates to their apps from the various app stores. The iOS and Android ecosystems also release OS updates on a regular cadence.

With this in mind, regular updates aren't abnormal, and we are committed to continuously rolling out new capabilities to users around the globe – but we also understand that there are use cases where this simply doesn't make sense. Windows is unique in that it is used in an incredibly broad set of scenarios – from a simple phone to some of the most complex and mission-critical scenarios in factories and hospitals. One size (and one servicing model) does not fit all of these scenarios.

    To strike a balance between the needed updates for such a wide range of device types, there are four servicing options you will want to deeply understand.

    • Windows Insider Program
    • Current Branch (CB)
    • Current Branch for Business (CBB)
    • Long-Term Servicing Branch (LTSB)

    Read more about this on the post “Navigating the Windows 10 Servicing Options.”



(This article is a translation of Talking Security: seizing the opportunity, published on the Microsoft Partner Network Blog on November 19, 2015. For the latest information, please see the linked original page.)


     

Anyone who uses the Internet knows that digital security is a constant and critical concern. Hackers have become more sophisticated, more organized, and better funded, and as business users go mobile and bring their own devices to work, the need for security has grown everywhere. While security is a major challenge for your customers, it is also a business opportunity for partners.
Inspired by Brad Smith's keynote (in English) at the Worldwide Partner Conference in July, I thought again about what security means to partners' customers. Customers want security solutions, but even more, they want a solution provider they can trust. They also want transparency and control over access to their corporate content in the cloud.
As you can tell from CEO Satya Nadella's speeches and the blog posts published by Brad Smith and others, security is not just a priority for Microsoft; it is an urgent imperative. To help partners deliver the best possible service, Microsoft has introduced new capabilities that give customers even finer-grained control over their corporate data.

     

The challenges customers face
Malware protection: Customers use many devices, including employees' personally owned ones. Keeping malware off every one of those devices is a daunting task for IT teams. Viruses and spyware change constantly, and new threats keep arriving over the network. Staying ahead of the hackers while maintaining productivity, collaboration, and access is no small feat.


Controlling corporate identity and data access: Organizations need to ensure that only the right users can access sensitive data. They also need to revoke permissions when employees leave, and keep data safe if a PC is lost or stolen.


Securing remote connections: Today, people work everywhere – at customer sites, at home, in cafés. An enormous amount of sensitive information moves between devices, users, and systems, sometimes over poorly secured networks, so protecting data at rest and in transit is essential.

     

Driving the customer conversation
The key to helping customers understand how your solutions can prevent risk is first helping them understand the risks themselves. Microsoft supports partners in preparing for these important conversations.


As in business generally, there is no one-size-fits-all security solution for every company. What you can do instead is build the best solution for each customer based on an assessment of their vulnerabilities. The "Protect your business" section of the ModernBiz campaign page offers a variety of resources to support these customer conversations.

     

The partner opportunity
Once you understand a customer's needs, you must identify the solution that offers them the best protection and the greatest peace of mind.


Malware protection: Office 365 and Azure have some of the most advanced anti-malware capabilities in the industry, with automated updates. Combining this anti-malware protection with the new security features of Windows 10 delivers comprehensive, industry-leading protection.


• Windows 10 updates Windows Defender and Windows Firewall with even stronger capabilities for scanning and quarantining unwanted software. For more on Windows 10's strong security, see the "Safeguard your business" presentation (in English) on the ModernBiz page.
• Device Guard works with Trusted Boot to allow only trusted software to be downloaded.
• Credential Guard protects credentials in a hardware-based environment isolated from the running operating system.
• In addition, Microsoft's Cloud & Enterprise division is delivering security solutions for managing the apps and devices essential to customers' employees, including enhancements to Microsoft Intune. With the greater visibility provided by Azure Security Center, advanced threats can now be detected quickly and the right action taken.

     

Establishing the right identity and access
• Microsoft Passport and Windows Hello are multi-factor authentication features in Windows 10, developed as part of Microsoft's mission to find safer, more convenient alternatives to the password. Microsoft Passport's flexible two-factor identity uses the device plus biometrics or a PIN. Windows Hello unlocks a device using biometrics such as a fingerprint or iris scan.
• Enterprise Data Protection, due for release soon, protects corporate data by separating and containerizing it from personal data. Many customers are testing this feature now, and it will be made available to Windows Insider Program participants shortly.
• As more applications move to the cloud, the need to protect corporate data grows with them. Microsoft recently acquired Adallom, a cloud access security vendor, making it possible to maintain visibility and control over access to applications.

Secure remote connections: A café is a great place for an espresso, but not the best place for secure Internet access. Yet many employees need to connect to data and business applications while on the go, and companies need to provide that access while protecting the organization's information and systems. Windows 10 enhances its VPN solution so that companies can balance security considerations with remote workers' productivity.
In addition, the Always Encrypted technology in SQL 2016 protects data at rest and in transit, enabling the comprehensive security solutions customers are asking for.

     

The opportunity is here
The Ponemon Institute (in English) recently published the results of a survey of 350 companies in 11 countries, all of which had suffered a security breach within the past year. Including lost business opportunities, increased customer churn, the cost of acquiring new customers, and damage to brand reputation, the average cost of a data breach came to $3.79 million.*
It is clear that addressing security threats is more urgent than ever, and customers, naturally, need solutions. By providing the right solution and helping customers resolve their challenges, you can further cement your position as a trusted advisor.
* CSO Online (in English), May 27, 2015: "Ponemon: Data breach costs now average $154 per record"

     



Here we are! It is almost time. Over 16 months ago, Microsoft announced that support for legacy versions of Internet Explorer would end on January 12th, 2016 (http://blogs.msdn.com/b/ie/archive/2014/08/07/stay-up-to-date-with-internet-explorer.aspx). The hour is almost upon us. In addition to the announcement, technologies including Enterprise Mode, Compatibility View, and persistent emulation modes were added or enhanced to help customers bring older sites and web applications along, removing deployment blockers to IE11 and, ultimately, Windows 10. Most of our enterprise customers have already leveraged (or are currently in the process of leveraging) these technologies.

If you are still running an older version, you will soon notice a warning message start to appear. In December, Microsoft published an article (https://support.microsoft.com/en-us/kb/3123303) that lays out the details of a new "End of Life" upgrade notification for Internet Explorer, which will ship as an update next week on January 12th.

The update will apply to Windows 7 SP1 and Windows Server 2008 R2 for users who have not upgraded to Internet Explorer 11 (i.e., IE8, IE9, and IE10 users). The update includes a new "end of support" notification feature that runs when the browser is launched. It will automatically open a new tab with the appropriate download page (http://windows.microsoft.com/en-us/internet-explorer/download-ie) for your particular operating system.

For those enterprise customers still in the process of deploying and migrating to Internet Explorer 11 (or who have arranged a custom support agreement), the KB article mentioned above also lays out instructions for disabling the notifications.

For those customers still on Windows Vista and Windows Server 2008 (which are in extended support and do not support IE11), those operating systems will not be affected by the update; IE9 is the latest version of Internet Explorer supported on them. Windows 8 and Windows 8.1 are also unaffected (support for Windows 8 ends on January 12th, and Windows 8.1 comes with IE11).

The notification tab will not appear on every launch of the browser. After the tab is closed, 72 hours will pass before it is shown again, and only when launching IE (i.e., not during a browsing session).

For more information about the end of support for old versions of Internet Explorer, see the following links: https://www.microsoft.com/en-us/WindowsForBusiness/End-of-IE-support and the https://support.microsoft.com/en-us/lifecycle#gp/Microsoft-Internet-Explorer page. For technical information about upgrading to Internet Explorer 11 and Microsoft Edge, see the Browser TechCenter pages on TechNet (https://technet.microsoft.com/en-us/browser).



In this article, I'll continue the story of the COSN Platform – the ultimate solution for service providers that want to offer Azure-like services hosted in a local datacenter. The first post is available here.

    High Availability of the COSN Platform

The COSN Platform was designed to be highly available, cost optimized, and scalable. It consists of several major components, and all of these components need to be highly available. The platform's overall stability is built on several levels of redundancy.

    Server hardware redundancy

We recommend that every server be equipped with a pair of redundant power supplies (connected to separate power lines), ECC memory, and redundant fans. Well-known server hardware brands are preferred, with fast replacement of failed components.

    Storage redundancy

If you wish to use traditional storage, we recommend at least a storage system with a pair of redundant controllers, connected to hosts via pairs of cables using MPIO. RAID 10 is preferred for storing VMs; RAID 5/6 is preferred for storing backups and library items, with a hot-spare disk available. This protects against the failure of one controller (including its power supply), an Ethernet/FC cable, and disk drives.

If you don't have a storage system available and you are at the stage of planning a storage solution for the COSN Platform, it's a good idea to use Microsoft software-defined storage. It is redundant by default.

Whichever approach you choose, remember that storage is the core of the whole COSN Platform, and its availability is critical for your tenants.

    Network redundancy

Network connectivity is equally core to the COSN Platform, so you definitely need redundant switches and redundant cable connections between hosts and switches. NIC Teaming is the right approach for connectivity between Hyper-V hosts and ToR switches, and MPIO is the approach for iSCSI-based storage connectivity. A dedicated network adapter for management and monitoring is recommended.
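A minimal NIC Teaming sketch using the built-in NetLbfo cmdlets (the team and adapter names are placeholders):

# Create a switch-independent team from two physical adapters
New-NetLbfoTeam -Name "TenantTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic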

    Hyper-V Host redundancy

Any server can fail; nobody is protected from that. Using Hyper-V clustering is therefore required to ensure that all running VMs restart on a healthy host if their current host fails. It also allows you to reboot any host in the cluster without downtime, because running VMs are seamlessly migrated to other hosts in the cluster using Live Migration.
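Validating and creating such a cluster takes only a couple of commands from the FailoverClusters module (host names and the address are placeholders):

# Run cluster validation, then build the cluster
Test-Cluster -Node "HV01","HV02","HV03"
New-Cluster -Name "Compute01" -Node "HV01","HV02","HV03" -StaticAddress 10.0.0.50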

I recommend using three different Hyper-V clusters:

1. Management cluster. It runs all management VMs and VMs with internal services – AD, VMM, WAP, etc.
2. Compute cluster. It runs all tenant VMs. Hyper-V 2012 R2 has a limit of 64 hosts per cluster, so for a big environment you may need several compute clusters. You may also need several compute clusters if you use different types of server hardware for tenant workloads.
3. Network cluster. This is a small but very important cluster – it hosts the NVGRE network gateway VMs. If you use traditional VLANs and don't plan to use NVGRE, you don't need it. Usually the network cluster consists of 2-3 hosts running 2+ VMs with RRAS installed. You need to separate it from the compute cluster because you can't run VMs that use NVGRE and network gateway VMs on the same host (it is not supported). It's also not a good idea to combine the network cluster with the management cluster, from a logical perspective, because it's better to separate the management VMs from the VMs that tenant VMs need in order to function (if the management cluster fails, Azure Pack won't work, but at least tenant VMs will still be able to access the Internet and communicate with each other).

    Management components redundancy

Even if you use a Hyper-V cluster for all management VMs, you can still have significant downtime, because VMs need some time to restart on a healthy host after their current host fails. Some operations can also be corrupted during this process (SQL Server transactions, for example). That's why we recommend using failover and high-availability technologies for all management-stack components, adding another level of redundancy.

The different management-stack components of the COSN Platform use different methods for high availability. They can be grouped into four types:

1. Failover Clustering. High availability based on the Windows Server Failover Clustering component.
2. Network Load Balancing. High availability based on load balancing of network traffic; it is used for stateless services. You can use either the free Microsoft NLB or a commercial load balancer – Citrix NetScaler, Kemp LoadMaster, or similar.
3. Service specific. High availability using technologies specific to the service. For example, for highly available Active Directory Domain Services you deploy an additional domain controller, configure replication, and reconfigure DNS on member servers; neither Failover Clustering nor NLB is involved.
4. Highly available backend. Some services require a SQL Server backend. You can choose between several high-availability options for SQL Server, such as AlwaysOn Failover Clustering or AlwaysOn Availability Groups.

Here is a list of the main COSN Platform components and their high availability technologies:

Component                            High Availability Technology
Active Directory Domain Services     Service Specific
VMM                                  Failover Clustering + Highly Available Backend
SPF                                  NLB + Highly Available Backend
Windows Azure Pack components        NLB + Highly Available Backend
SMA                                  NLB + Highly Available Backend
Network Gateway (RRAS)               Failover Clustering
SCOM                                 Service Specific + Highly Available Backend

As you can see, the COSN Platform lets you achieve multi-level redundancy in line with the modern world's 24/7/365 requirements. The options are flexible, and you can choose the level of high availability that suits your customers' needs.

    Microsoft Software-defined Storage in details

The recommended storage option for the COSN Platform is Microsoft software-defined storage, which I mentioned in a previous post.

It is based on Windows Server 2012 R2 Storage Spaces and Scale-Out File Servers (SOFS). With this approach you need 2+ JBODs with SAS disks, connected to 2+ Scale-Out File Servers.

Each JBOD enclosure is connected to each Scale-Out File Server by a SAS cable. JBOD enclosures usually have at least 2 external SAS ports, so if you have only 2 SOFS nodes you can connect the JBODs to them directly. If you have a bigger deployment with 4+ JBODs, you'll need more SOFS nodes, plus a pair of SAS switches to connect all the components together.

For storing VMs, we recommend mirrored storage spaces with SAS SSDs (20%) and SAS NL disk drives (80%). This ratio is optimal for storage tiering, and such storage is fast, cost efficient, and high capacity. For storing backups and library items, you can use parity storage spaces with SAS NL disk drives. SATA drives are not supported in this configuration, because a SATA drive can connect to only one server at a time, and we have two SOFS nodes.
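A sketch of building such a tiered, mirrored space with the Storage cmdlets (the names and tier sizes are placeholders to adapt to your disks):

# Pool all poolable disks, define the SSD and HDD tiers, then carve a mirrored tiered disk
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "VMPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
$ssd = New-StorageTier -StoragePoolFriendlyName "VMPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "VMPool" -FriendlyName "HDDTier" -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMSpace" `
    -ResiliencySettingName Mirror -StorageTiers $ssd,$hdd -StorageTierSizes 400GB,1600GB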

We also recommend using RDMA to dramatically increase the performance of the network connectivity between the Scale-Out File Servers and the Hyper-V hosts. This is called SMB Direct. Special network cards are required for this, but it's definitely worth it, because it eliminates the bottleneck of network throughput and latency between the SOFS nodes and the Hyper-V hosts.

There are two different protocols for RDMA:

1. iWARP. It is switch independent and supports routing of RDMA traffic. For example, Chelsio network cards support iWARP.
2. RoCE. It is not switch independent, so you also need compatible ToR switches, and additional switch configuration is required. Traffic routing is not available, but it is not usually needed in the COSN Platform. For example, Mellanox network cards and switches can use RoCE with the COSN Platform. Mellanox has published benchmark results on RoCE's performance advantage over iWARP.
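To confirm that RDMA is actually being used end to end, the NetAdapter and SMB modules have quick checks:

Get-NetAdapterRdma                 # Is RDMA enabled on the NICs?
Get-SmbClientNetworkInterface      # RdmaCapable should be True
Get-SmbMultichannelConnection      # Shows whether SMB connections use the RDMA-capable paths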

A configuration of two JBODs with mirrored storage spaces plus two Scale-Out File Servers can survive the failure of an entire JBOD enclosure, disk drives, SAS cables, and one SOFS node. There is no single point of failure; everything is redundant.

As an evolution of the Microsoft software-defined storage approach, Windows Server 2016 Datacenter will support a new technology called "Storage Spaces Direct". It allows you to build storage from local disks installed in the servers, with data replicated across several servers via the LAN.

I think it will be a good approach for backups, library items, and other non-performance-sensitive workloads, but the traditional Storage Spaces approach based on JBODs and SOFS will still be king for storing VMs because of its balance of performance, availability, and cost efficiency.

    Deployment of COSN Platform

As you can see, the COSN Platform is a complex thing if you need to deploy it in production with the additional levels of redundancy and high availability.

    Microsoft provides you with several deployment options:

    1. Manual installation
    2. PowerShell Deployment Toolkit (PDT)
    3. Service Provider Operational Readiness Kit (SPORK)

    Manual Installation

This one is obvious – read 100+ TechNet pages, install each component using its documentation, configure everything, and you are done. It is a complex and long road, but we recommend walking it so that you fully understand how the COSN Platform works; otherwise it will be hard to operate a black box. I also recommend reading the IaaS Product Line Architecture documents, which include detailed instructions for the fabric deployment.

    This is an overview of a manual installation:

    1. Plan everything. This is super important.
    2. Connect all cables
    3. Deploy new AD or use existing
    4. Configure storage
    5. Install Hyper-V on hosts
    6. Create a Hyper-V management cluster and connect shared storage
    7. Deploy highly available SQL Server for management databases
    8. Install VMM and VMM Library
    9. Install SPF
    10. Install WAP
    11. Install SMA, WSUS, WDS and other additional components
    12. Configure networking in VMM
    13. Deploy Hyper-V network cluster
    14. Deploy network gateways
    15. Deploy RDS services for console access to VMs in WAP
    16. Create Clouds in VMM
    17. Connect VMM Clouds to WAP
    18. Configure WAP
    19. Create VM Templates
    20. Deploy additional WAP Services if needed
    21. Publish WAP Tenant Portal, Tenant Authentication and Public Tenant API and other required services to the Internet.

It may look easy, but it can take several weeks to deploy. It's worth it, though, because you'll understand how the COSN Platform works from the inside.

    PowerShell Deployment Toolkit

PowerShell Deployment Toolkit (PDT) is a set of PowerShell-based scripts that can automatically deploy System Center and Windows Azure Pack components using an architecture you describe in a special XML file.

PDT speeds up the COSN Platform installation process, but it doesn't eliminate the planning and configuration steps; you still need to understand what you are deploying and how. There is a special GUI tool, PDTGui, that can help you create the XML file. Check this video from TechEd to learn more about PDT.

    Here is an overview of COSN Platform installation using PDT:

    1. Plan, plan, plan!
    2. Create variables.xml file for your architecture
    3. Connect all cables
    4. Deploy new AD or use existing
    5. Configure storage
    6. Install Hyper-V on hosts
    7. Create a Hyper-V management cluster and connect shared storage
    8. Run PDT to install COSN Platform components
    9. Configure networking in VMM
    10. Deploy Hyper-V network cluster
    11. Deploy network gateways
    12. Create Clouds in VMM
    13. Connect VMM Clouds to WAP
    14. Configure WAP
    15. Create VM Templates
    16. Deploy additional WAP Services if needed
    17. Publish WAP Tenant Portal, Tenant Authentication and Public Tenant API and other required services to the Internet.

As you can see, this process is a little easier, and it can reduce the deployment time from several weeks to a week or two.

    Service Provider Operational Readiness Kit

Service Provider Operational Readiness Kit (SPORK) is a special tool for service providers that helps you create PDT configuration files. It generates PowerShell scripts based on PDT and creates all the configuration files for you. It includes several built-in, ready-to-use architecture templates:

1. Proof of Concept (POC) templates – use these if you need to deploy the COSN Platform quickly for testing or demo purposes without high availability (non-production).
2. Product Line Architecture (PLA) templates – production-ready templates based on the IaaS PLA I mentioned earlier.

    Check the video demonstration here.

Compared to PDT, SPORK simplifies the planning process, because you use a built-in architecture. It also has a GUI, so you don't need additional tools if you are not a fan of the command line. SPORK's main advantage is that it does a lot of the tasks for you, which minimizes human error. Warning: SPORK is not publicly available for download; you need to ask your Microsoft representative to provide the package with the software bits and documentation.

My recommendation for service providers: read the documentation, deploy the COSN Platform manually in the lab, and then re-deploy it in production using SPORK. You'll have the experience to operate the COSN Platform, and you'll be sure it is deployed correctly in production.

    Server Core or Server with GUI?

As you may already know, there are four GUI levels in Windows Server 2012 R2: Server Core (no GUI), Minimal Server Interface, Server with a GUI, and Desktop Experience (a Windows 8-like GUI).

I recommend using Server Core wherever you can, because it minimizes the number of updates you need to install, requires less RAM and disk space, and is more secure, since the attack surface is smaller than in Server with a GUI mode. Server Core is supported for all COSN Platform components except the Active Directory Federation Services and SCOM Data Warehouse VMs; on all other VMs it is better to use Server Core.

If you're wary of Server Core mode, you can install Windows Server with a GUI, configure everything, and then disable the GUI via Server Manager – it only requires a restart. It's also a good idea to deploy a dedicated management VM with the "Server with a GUI" option and install all the Windows Server and System Center management tools and consoles on it. All COSN Platform administrators can then manage all platform components from one place, with role-based access policies applied.
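The GUI-to-Core conversion mentioned above can also be done with the ServerManager cmdlets (the restart is triggered by -Restart):

# Remove the graphical shell and management infrastructure, leaving Server Core
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart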

    Reference architecture

In October 2014, Microsoft and Dell launched a cool thing together called the "Cloud Platform System" (CPS). This is an in-a-box solution that combines Dell hardware with Microsoft software: you buy from one to four racks fully loaded with Dell hardware, and Microsoft engineers install the COSN Platform on top of it. You have one point of contact for support, and the solution is fully tested by Dell and Microsoft. Check this video for details.

A year later, the CPS lineup was extended with a smaller solution: the old CPS is now called "CPS Premium", and the new light version is named "CPS Standard".

CPS Premium is not available everywhere, and because more than a year has passed, some of its components are now outdated. But I usually use the CPS Premium architecture as a reference: you can deploy the COSN Platform on similar non-Dell hardware using the same architectural principles. So if you are at the deployment stage, spend some time learning how CPS Premium is built from the inside.

    Future of COSN Platform

Since the announcement of Azure Stack last year, I have often received questions about the future of the current COSN Platform, based on Azure Pack.

First of all, I want to make clear that the COSN Platform and Azure Pack are not dead. With the release of Azure Stack this year, Microsoft will have two separate offers for companies that want to deploy "Azure in their own datacenter":

1. A more Azure-consistent but less flexible platform, based on Azure Stack, which shares bits with Azure Resource Manager and the new Azure portal (currently in preview).
2. A less Azure-consistent but more flexible platform, based on Azure Pack. It shares some bits with the current Azure portal in terms of UI, but it is based on Windows Server and System Center technologies, some of which are not available in Microsoft Azure.

Both offers have strong and weak points, because Azure consistency is cool, but not for everyone. The COSN Platform has some cool technologies that are very valuable for service providers but not available in Azure, among them:

1. Generation 2 VMs without legacy virtual hardware (more resource efficient than Generation 1 VMs)
2. VHDX disks up to 64 TB
3. Console access to VMs
4. Multiple network adapters
5. Dynamic Memory
6. Shielded VMs

So the current COSN Platform has its strong points for customers compared with Microsoft Azure (and Azure Stack). The COSN Platform is flexible – you can use any available storage system, you can use traditional VLAN-based isolation of tenant networks, and so on. Right now, the future of the COSN Platform looks like this:

• The COSN Platform will support Windows Server 2016 and System Center 2016. Cool new features like Shielded VMs and Storage Spaces Direct can already be implemented in the COSN Platform using preview versions of these products.
• Azure Pack continues to evolve. Multiple-external-IP functionality was recently added with Update Rollup 8, and new functions and fixes will arrive with every Update Rollup released over the coming years.

I think that with the release of Windows Server 2016, System Center 2016, and Azure Stack by the end of next year, the ultimate solution will be to deploy the current COSN Platform on the 2016 versions for classic IaaS, Desktop-as-a-Service, and Database-as-a-Service, and to deploy an additional environment on Azure Stack for modern, Azure-consistent IaaS and PaaS services. Either way – don't wait; start your journey to the Microsoft hybrid cloud with the COSN Platform today!

That's all for today. Thank you for reading; I hope it was valuable for you. Over the next several weeks, I plan to publish new blog posts about the Azure Pack tenant experience and functionality extensions, lessons learned from worldwide COSN Platform deployments, how-to guides on Azure and COSN Platform integration, and more. So stay tuned!



One of my favorite features of Azure is Azure RemoteApp. For those of you unaware of it, Azure RemoteApp is Microsoft RemoteApp backed by Remote Desktop Services in Azure. The service provides secure remote access to applications from different devices; users can access their applications on their devices while you manage the data and access via Azure. Your data can be safely stored in Azure or on-premises. In this step-by-step, I will demonstrate how to configure Azure RemoteApp with a backend...(read more)


(This post is a translation of Microsoft Azure Stack: Hardware requirements, published on December 21. Author: Jeffrey Snover, Technical Fellow, Microsoft.) Today I'd like to share some information for customers planning to deploy the Azure Stack Technical Preview early in the new year. Since the Ignite conference in 2015 (in English), Microsoft has been working hard toward realizing the vision of "Azure in your datacenter." As part of that effort, this post covers the hardware requirements for the Azure Stack Technical Preview. We have prepared an easy-to-follow video, so please take a look. (Please visit the site to view this video) Microsoft, on a single server instantiated as a proof-of-concept (POC) environment, the Azure Stack Technical...(read more)


In November we published the first part of a series called "Microsoft Azure Backup Server." Now we bring you part two. Setting up protection (backup): In the management console, go to the "Protection" tab and select "New" from the menu. This launches the wizard for creating so-called Protection Groups, which is, very simply put...(read more)


    Summary: Sean Kearney shows you how to get sample code in the Windows PowerShell ISE.

    Honorary Scripting Guy, Sean Kearney, is here today to show you a really cool feature that has been in Windows PowerShell ISE for a while, but you might have overlooked it! It’s called Snippets.

  Note   This is Part 4 in a five-part series.

    I love using the Snippets feature on a regular basis, because although I can work in PowerShell, I don’t memorize code. I’m an IT pro, not a dev. Even my good developer friends don’t memorize everything.

    The Snippets feature was actually introduced in Windows PowerShell ISE 3.0, and I’ve always found it to be an invaluable tool.

    Press CTRL+J on the keyboard to pull up the main list of Snippets.

    Image of menu

    As you can see, there is a large pile of prebuilt code examples in the PowerShell ISE. If you’d like to use one, simply select it from the list. You can use the arrow keys to navigate up and down the list, or you can use your mouse.

    Clicking an example immediately pastes it into the active window, whether that is the Script Panel or the Console view in the ISE. Here is an example of the do-while loop in the Snippet list and pasted into the editing window:

    Image of command

    The cool part about the Snippets is that although Microsoft provides a strong rudimentary list, you can also add to it! Adding your custom snippets requires an example stored as a here-string and the New-ISESnippet cmdlet.

    In the following example, I am going to add a sample to remind myself how to define a string array, including some comments I should remember to edit. Here is the original code in PowerShell:

    # My Example String

    #

    # Some string array with a description

    [string[]]$SampleValue=@()

…and here is what was required to add it as a personal Snippet in my PowerShell ISE. (Note the single-quoted here-string, @'…'@, which keeps $SampleValue from being expanded before the snippet is saved.)

$Text=@'

# My Example String

#

# Some string array with a description

[string[]]$SampleValue=@()

'@

$Title="Array Sample"

$Description="How to create a simple String Array including Comments"

$Author="Sean Kearney, Honorary Scripting Guy"

New-ISESnippet -Text $Text -Title $Title -Description $Description -Author $Author

    If I execute this code and press CTRL+J immediately to see the list of Snippets, I can see that this new Snippet was added:

    Image of command

You'll also notice that the new Snippet is an XML file within the user's personal PowerShell folder, under a special folder called Snippets. All of a user's personal Snippets are stored in this folder.

    If you back up this folder, you can easily transfer any new Snippet examples between computers. As you reload your ISE each time, the new Snippets are immediately available to you.

    Of course, if you’ve decided you don’t like a particular example, you can easily remove it from the system by deleting its XML file or moving it out of the PowerShell Snippets folder.

You can also (if you feel so inclined) edit the sample Snippet in the XML file. Here is the Snippet I just created; you can see the sample code within the block marked <Code> and </Code>.

    Image of script

You can simply edit this file in any standard text editor, such as Notepad.

Sure, Snippets is an older feature, but it's one of those touches that makes the built-in ISE pleasant for an IT professional to use.

Tomorrow, I'll wrap up this week of features I love to use in the Windows PowerShell ISE by talking about remote text file editing with the PSEdit cmdlet.

    I invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to them at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, always remember that with great PowerShell comes great responsibility.

    Sean Kearney, Honorary Scripting Guy, Cloud and Datacenter Management MVP



Hello, this is Kato from Windows Platform Support.

Today I'd like to walk through some points to keep in mind when building a failover cluster on Azure.
Recently, we have received reports of failover clusters built on Azure virtual machines failing over because their heartbeat communication went down.

Heartbeat communication is the traffic exchanged between nodes; its purpose is to monitor whether the inter-node network is alive.
When all heartbeat communication is lost, a node concludes that its partner node has stopped.
The default heartbeat timeout threshold for a failover cluster is 5 seconds. If heartbeats still cannot get through beyond this threshold, the partner node is judged to be down,
and if the stopped node was the owner of an application, that application fails over to another node.

- Reference
About failover cluster heartbeats
http://blogs.technet.com/b/askcorejp/archive/2012/03/22/3488080.aspx

In an Azure environment, a virtual machine can be paused for up to about 30 seconds during the planned maintenance described below.
If a cluster node is paused for 30 seconds, the other cluster node can no longer exchange heartbeats with the paused node, and the failover described above occurs.

========================================
Planned maintenance for Azure Virtual Machines
https://azure.microsoft.com/ja-jp/documentation/articles/virtual-machines-planned-maintenance/
Excerpt from the relevant section:

For this class of Microsoft Azure updates, customers see no impact on their running virtual machines. Many of these updates are to components or services that can be updated without interfering with the running instance. Some are updates to the platform infrastructure of the host operating system that can be applied without requiring a full restart of the virtual machines.

These updates are accomplished with technology that enables live migration ("memory-preserving" updates). During the update, the virtual machine is placed into a "paused" state, preserving its memory in RAM, while the underlying host operating system receives the necessary updates and patches. The virtual machine is resumed within 30 seconds of being paused. After resuming, the virtual machine's clock is automatically synchronized.

Not every update can be deployed using this mechanism, but given the short pause, deploying updates this way greatly reduces the impact on virtual machines.

Multi-instance updates (for virtual machines in an availability set) are applied one update domain at a time.
========================================

For this reason, when you build a cluster in an Azure environment, we recommend extending the heartbeat threshold in advance.
Since a virtual machine normally resumes within 30 seconds of being paused, we recommend extending the threshold to 31 seconds or more.

Note that in a Windows Server 2008 R2 cluster, the same-subnet heartbeat can only be extended to 20 seconds, so please extend it to that maximum of 20 seconds.
* In Windows Server 2012 and later clusters, it can be extended up to 240 seconds.

- Reference
Valid ranges for cluster heartbeat settings
http://blogs.technet.com/b/askcorejp/archive/2015/08/12/3653013.aspx

How to change the heartbeat thresholds
========================================
The default heartbeat settings in a WSFC environment are:

Heartbeat interval: 1 second
Threshold before the link is considered down: 5 missed heartbeats

A heartbeat packet is sent every second; when 5 consecutive packets fail, the network is considered disconnected and enters a "partitioned" state. In environments with an unstable network, changing the heartbeat settings from 5 tries at 1-second intervals to, for example, 10 tries at 2-second intervals can sometimes work around the network problem.

These settings are managed as the following cluster property values:

• When the cluster nodes are on the same subnet

  ◦ SameSubnetDelay (unit: milliseconds)
  ◦ SameSubnetThreshold (unit: count)

• When the cluster nodes are on different subnets

  ◦ CrossSubnetDelay (unit: milliseconds)
  ◦ CrossSubnetThreshold (unit: count)

The procedure for changing these property values differs by OS version.

Windows Server 2008 R2 clusters
    ----------------------------------------------------------------
1. Open a command prompt on a cluster node.
2. Run the following commands to check the current values:

    cluster /prop:SameSubnetDelay
    cluster /prop:SameSubnetThreshold

3. Run the following commands to change the settings from their defaults (for 2 seconds x 10 tries):

    cluster /prop SameSubnetDelay=2000
    cluster /prop SameSubnetThreshold=10

4. Run the commands from step 2 again and confirm that the changes have taken effect.

Running these on a single node applies the change to the entire cluster immediately.

Windows Server 2012 and later clusters
    ----------------------------------------------------------------
1) Open a PowerShell console on a cluster node.
2) Run the following cmdlets to check the current values:

    Get-Cluster | select SameSubnet*
    Get-Cluster | select CrossSubnet*

3) Run the following cmdlets to change the settings from their defaults (for 2 seconds x 10 tries):

    (Get-Cluster).SameSubnetDelay = 2000
    (Get-Cluster).SameSubnetThreshold = 10

    (Get-Cluster).CrossSubnetDelay = 2000
    (Get-Cluster).CrossSubnetThreshold = 10


4) Run the cmdlets from step 2 again and confirm that the changes have taken effect.

Running these on a single node applies the change to the entire cluster immediately.
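As an illustration only (the numbers below are an example, not a fixed prescription), a Windows Server 2012 or later cluster could combine a 2-second delay with a threshold of 20, tolerating 2,000 ms x 20 = 40 seconds of heartbeat silence, which clears the 31-second recommendation above:

# Example values: 2,000 ms x 20 heartbeats = 40 seconds of tolerated silence,
# comfortably above the 31 seconds recommended for Azure planned maintenance.
(Get-Cluster).SameSubnetDelay = 2000
(Get-Cluster).SameSubnetThreshold = 20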

We hope this blog post is helpful.

* This document is based on information as of January 6, 2016, and is subject to change in the future.


    0 0

More and more schools these days are moving to an Office 365 ProPlus subscription as their way of licensing the Office applications. Because, from an installation standpoint, it is a technologically somewhat different product than, say, the regular Office 2016 suite, it is worth walking through these differences and, above all, the correct way to build a school image.

Because schools now commonly run tens and sometimes hundreds of computers and tablets, they install them using so-called imaging: one reference computer is built, a system image is created from it, and the image is then deployed to the remaining computers. This approach is also popular because many schools run an Active Directory environment on Windows Server, which includes a complete imaging solution called Windows Deployment Services.

A detailed guide to deploying Windows in a school environment can be found, for example, in this Microsoft Virtual Academy course; here, however, we will look at how to correctly incorporate Office 365 into a school image.
The procedure is as follows:
     
1. Install the required operating system on the reference computer, install all updates, and optionally any other applications you want to include in the school image.
     
2. Install Office 365 ProPlus using the Office Deployment Tool. The ODT comes in two versions, one for Office 2013 and one for Office 2016, so pick the one matching the Office version you plan to deploy. The ODT can be downloaded from the following links:

Office Deployment Tool (Office 2013 version)
Office Deployment Tool (Office 2016 version)

Next, download the Office 365 ProPlus installation files using this guide, or use the ODT to install Office 365 ProPlus on the reference computer directly, as sketched below.
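A minimal sketch of the two ODT steps (run from the folder where you extracted the ODT; setup.exe and configuration.xml are the tool's standard file names, and the working folder is an assumption):

# Download the Office 365 ProPlus installation files described by configuration.xml.
.\setup.exe /download .\configuration.xml

# Install Office 365 ProPlus on the reference computer using the same configuration.
.\setup.exe /configure .\configuration.xml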
     
Warning: It is important NOT to let Office 365 ProPlus activate automatically. You can prevent this by performing a so-called silent installation: just add the following line to the configuration.xml file used by the ODT:
     
    <Display Level="None" AcceptEULA="True" />
     
3. It is important not to sign in to the Office 365 portal and install Office 365 ProPlus from there. If you do, activation runs automatically.
     
4. Once the ODT has finished installing Office 365 ProPlus, do not open any Office applications. If you do, you are immediately prompted to sign in and activate, and even if you do not sign in and simply close the activation prompt, a temporary product key is written into the system, which you obviously do not want baked into your school image. If you have accidentally launched any Office 365 ProPlus application, you must uninstall the entire Office 365 ProPlus installation, restart the computer, and start over.
     
5. Verify that the image contains no Office 365 ProPlus product key. Before you create the final image of the reference computer, make sure there really is no Office 365 ProPlus product key in the system, not even a temporary one. You can check this with the ospp.vbs script, which is installed automatically alongside Office 365 ProPlus. To run it, execute, for example, the following at a command prompt:
     
    cscript.exe "%programfiles%\Microsoft Office\Office15\ospp.vbs" /dstatus
     
The result should show the message <No installed product keys detected>. The location of ospp.vbs differs by operating system version and Office 365 ProPlus version, per the following table (a helper sketch after the table shows how to check the likely locations):
     
Office 365 ProPlus version | Operating system version | ospp.vbs location
---------------------------|--------------------------|-----------------------------------------------
32-bit                     | 32-bit                   | %programfiles%\Microsoft Office\Office15\
32-bit                     | 64-bit                   | %programfiles(x86)%\Microsoft Office\Office15\
64-bit                     | 64-bit                   | %programfiles%\Microsoft Office\Office15\
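A minimal PowerShell sketch that tries the known locations and reports the product key status (the paths are assumptions: Office15 covers Office 2013 builds, Office16 covers Office 2016 builds):

# Try the known ospp.vbs locations and run /dstatus with the first one found.
$paths = @(
    "$env:ProgramFiles\Microsoft Office\Office15\ospp.vbs",
    "${env:ProgramFiles(x86)}\Microsoft Office\Office15\ospp.vbs",
    "$env:ProgramFiles\Microsoft Office\Office16\ospp.vbs"
)
$ospp = $paths | Where-Object { Test-Path $_ } | Select-Object -First 1
if ($ospp) { cscript.exe //nologo $ospp /dstatus }
else { Write-Warning 'ospp.vbs was not found in any of the expected locations.' }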
     
6. Save the school image, e.g., with the Windows ADK, Windows Deployment Services, or the Microsoft Deployment Toolkit 2012.
     
7. Deploy the school image using the usual procedures you follow at your school.

    0 0

Happy New Year and Merry Christmas! We wish you health, success, and bright discoveries!...(read more)

    0 0

ZŠ s RVJ Magic Hill in Říčany near Prague is a Microsoft partner school. The purpose of this partnership is collaboration on validating the didactic concept known as 21st Century Learning Design (21CLD), a research output of Microsoft's Innovative Teaching and Learning Research program. In this study, experts identified six main goals of education, and at the same time student skills, that appear timeless and can be expected to answer the demands of a global world: collaboration, knowledge construction, self-regulation, problem-solving and innovation, the use of ICT for learning, and skilled communication. This model is now being successfully implemented in schools around the world. In the Czech environment, ZŠ Magic Hill is consistently embedding the 21CLD know-how into its school curriculum, and the outputs of the collaboration with Microsoft should serve as a reference and an example of good practice for other schools with similar goals.

In this spirit, at Magic Hill we occasionally hold demonstration lessons for student teachers, parents, teachers, and the professional public (for example, the learning activities Traffic Research, A Lesson about Blood, and Old Czech Legends). Similarly, in December 2015 we presented the outputs of the 21st-century-skills project "Traveling through Time," officially opened with a ribbon-cutting by Mgr. Karel Klatovský on behalf of Microsoft. In this blog post I will first walk readers briefly through the event itself and then through the behind-the-scenes of this learning activity, on which pupils from the whole school worked for more than a month.

One of the requirements for teaching according to 21CLD is a so-called authentic audience: the outputs of a learning activity should always extend beyond the walls of the classroom. The pupils therefore invited parents to the presentation and designed the whole event as the gradual assembly of a mosaic of information from individual talks and task stations. Visitors first entered the 1st-grade classroom, where they received instructions via a prepared video clip and a prepared worksheet (everything I mention here was prepared by the children themselves). Here the parents learned that they first had to walk through the first floor of the school building and solve 4 ciphers (each class prepared one), from which they assembled the instruction to continue to the gymnasium. There, the 4th grade had exhibited a timeline around the entire perimeter, and as the visitors walked along it, at each historical milestone they heard a short talk from the children and received slips of paper with key facts. The pupils were prepared to answer follow-up questions as well, and it is perhaps worth noting that the talks were exclusively in English.

Equipped with this theoretical knowledge, the parents continued to the school cafeteria, where 3rd-grade pupils had prepared period exhibits: ceramic bowls, wire jewelry, handmade models of an ancient temple and of a prehistoric forest with dinosaurs, weapons, and even a PowerPoint presentation about the First World War. The visitors' task was to complete a drawing of the given exhibit on the slips from the gymnasium, enriching the information with a visual component. Finally, everyone went to the 5th graders, with whose help and oversight the visitors had to sort their slips again and glue them down in the correct chronological order, so that everyone left the event with their own handmade timeline of the most important facts.

Some of the tasks the children planned may have seemed endearingly naive to the parents, but all the pupils took them utterly seriously. What mattered was that, during both preparation and organization, the teachers intervened in the children's work only minimally. Not everything was perfect as a result (solving the ciphers, for example, took rather long), but it was certainly authentic. First and foremost, the whole school, the classes, and the individual pupils had to work together as a team. Organizing such an event would have been beyond the powers of any individual; everyone, from first grader to fifth grader, had an irreplaceable role in the scenario. The children decided together what had to be prepared and arranged, and how. Another aspect of this learning activity was its emphasis on knowledge construction. Each grade covers a different period in its history lessons, so the only practical way to cover the whole timeline was to put their heads together. By preparing the talks, the fact cards, and the exhibits, the pupils became experts on their individual periods, because we learn things best when we explain them to someone else ourselves.

The self-regulation skill in 21CLD requires above all that the activity be long-term; only then do pupils have enough opportunities to think through their work, revise it based on mutual feedback, and polish it to perfection. The pupils spent more than a month preparing the project. I note again that the teachers contributed only by defining the topic; the actual realization began with the children's brainstorming and agreeing on the details. The entire realization of this event was thus a problem situation in itself: devise the scenario, divide the work and roles, secure the materials, and coordinate the timing, all while the children kept in mind that they were addressing a real audience (the parents), whom they had to teach well enough that, at the end, the audience could pass the final test (arranging the facts on a timeline).

The pupils' essential sources of study information were, of course, encyclopedias and technology: the internet (the Bing search engine) and the Office suite (especially PowerPoint and Word for preparing invitations, signs, ciphers, maps, leaflets, timelines, and so on). Finally, the sixth 21st-century skill, communication, was represented in several ways. The pupils used the media of a film clip, digital presentations, visual representation, expository talks, Excel charts, and cryptology (the ciphers), and they also shot their own film documentation of the event. All of it, to the best of their abilities, in English!

So this is, in our conception, 21st-century skills in action. Someone will surely object that it is nothing groundbreaking. We agree, it certainly is not: no sophisticated technologies and procedures in artificially created conditions. A school of 95 pupils aged 6-11 simply committed to one common topic; the children invented the scenario and the activities, divided the roles, invited an audience, and then, at the appointed time and place, showed everyone what they had learned in a month. And it was not just historical information, but above all the ability to contribute to a common work and to take full, serious responsibility for it. In a word: 21st-century skills.

Jan Voda, School Principal, reditel@magic-hill.cz

     


    0 0

Hello, this is Ueki from Internet Explorer Support.
This time I would like to introduce a common inquiry about "enabling add-ons" that arises when migrating to IE11.

When an add-on has been installed, IE11 displays a pop-up like the following, asking whether to enable or disable the add-on.

We receive inquiries asking whether this pop-up can be permanently suppressed.

With either of the two group policies below, you can set add-ons installed in IE11 to be enabled, which suppresses the pop-up.

    a)"特定のアドオン" を非表示にしたい場合
    以下の、ポリシー項目を"有効" と指定することで、特定のアドオンを既定で有効として、ポップアップを非表示にできます。

~~~~~~~~~~
[User Configuration (or Computer Configuration)]
- [Administrative Templates]
 - [Windows Components]
  - [Internet Explorer]
   - [Security Features]
    - [Add-on Management]
     - [Add-on List]

Value name: the add-on's CLSID (class identifier)
Value: 1 (allow the add-on) or 0 (deny the add-on)
~~~~~~~~~~
* You can check an add-on's CLSID on a machine where the add-on is installed, in the [Manage Add-ons] dialog under [More information] - [Class ID].
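For reference, a minimal sketch that writes the same Add-on List value directly into the registry location backing this policy (the CLSID is a placeholder; HKCU corresponds to User Configuration):

# Write the Add-on List policy value for one add-on (per user).
# The CLSID below is a placeholder -- replace it with the Class ID shown in
# [Manage Add-ons]. A value of "1" allows (enables) the add-on.
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies\Ext\CLSID'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name '{00000000-0000-0000-0000-000000000000}' `
    -Value '1' -PropertyType String -Force | Out-Null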


    b)"全てのアドオン" で非表示したい場合

    以下の、ポリシー項目を "有効" と指定することで、全てのアドオンを既定で有効として、ポップアップを非表示にできます。

~~~~~~~~~~
[User Configuration (or Computer Configuration)]
 - [Administrative Templates]
  - [Windows Components]
   - [Internet Explorer]
    - [Automatically activate newly installed add-ons]
~~~~~~~~~~
     


We hope you will consider the approach that fits your needs, such as using the settings above to suppress the pop-up in your IE11 users' environments.

     

This blog has also published a related article, linked below. We hope you find it helpful as well.

     

Internet Explorer Big Changes! (4) - Enhanced add-on management (IE9 and later)
http://blogs.technet.com/b/jpieblog/archive/2014/07/29/3635169.aspx

     


    0 0

    Today’s Tip…

You can now create a SQL Azure server with a custom name via PowerShell, using the Resource Manager API.

    Just follow these steps:

[Screenshot: the PowerShell steps]
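As a rough sketch of what those steps typically look like with the AzureRM module (the resource group name, server name, and location are assumptions):

# Sign in to your Azure subscription.
Login-AzureRmAccount

# A resource group to hold the server (skip if you already have one).
New-AzureRmResourceGroup -Name 'MyResourceGroup' -Location 'West US'

# Create the SQL server with a custom name; you are prompted for the
# SQL administrator credentials.
New-AzureRmSqlServer -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'my-custom-server-name' `
    -Location 'West US' `
    -SqlAdministratorCredentials (Get-Credential) `
    -ServerVersion '12.0'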

