Channel Description:

Resources for IT Professionals



    Microsoft has announced that Windows Server 2016 and System Center 2016 are scheduled to launch during the third calendar quarter of 2016.

    The licensing rules (including SPLA) for Windows Server 2016 and System Center 2016 will differ from previous versions (2012 R2 and earlier):

    • Licensing moves from a per-processor model to a per-physical-core model, with a minimum of 8 core licenses per physical processor.
    • Core licenses will be sold in packs of 2 cores, each priced at 25% of the 2012 R2 per-processor license. So ordering 8 core licenses per physical processor (i.e. four 2-core packs) costs the same as the per-processor license.
    • If you choose to migrate to the latest version as soon as Windows Server 2016 is available, you must apply the 2016 server licensing.
    • If you run earlier Windows Server versions (2012 R2 and before), the SPLA rules allow you to keep using the old SKUs until the end of your agreement.
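    The 2-core pack arithmetic described above can be sketched in a few lines. This is an illustration only; the prices are hypothetical placeholders, and the only rules taken from the announcement are the 8-core-per-processor minimum, the 2-core pack size, and the pack price of 25% of the 2012 R2 per-processor license.

```python
# Illustrative sketch of the Windows Server 2016 core-licensing arithmetic.
# Assumptions: minimum 8 core licenses per physical processor, licenses sold
# in 2-core packs, pack price = 25% of the 2012 R2 per-processor price.

def packs_required(cores_per_processor: int) -> int:
    """Number of 2-core packs needed for one physical processor."""
    licensed_cores = max(cores_per_processor, 8)  # 8-core minimum per processor
    return -(-licensed_cores // 2)                # round up to whole packs

def cost_per_processor(cores_per_processor: int, price_2012r2_per_proc: float) -> float:
    """Cost of licensing one physical processor, given a (hypothetical) 2012 R2 price."""
    pack_price = 0.25 * price_2012r2_per_proc
    return packs_required(cores_per_processor) * pack_price

# An 8-core processor needs 4 packs, matching the 2012 R2 per-processor price;
# a 16-core processor needs 8 packs, i.e. twice that.
print(cost_per_processor(8, 1000.0))
print(cost_per_processor(16, 1000.0))
```

    As the announcement notes, machines with up to 8 cores per processor come out cost-neutral; denser processors cost proportionally more.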



    Block out your calendar now and join us for Hosting Days, our annual event presenting Microsoft's hybrid cloud vision and strategy. A field of opportunities for hosters to seize right now!

    Hybrid infrastructure, mobility, data management, productivity applications: discover how to capture these new growth areas, the Microsoft programs available to you, the technical support on offer, and the key success factors. Hosters, managed service providers, software vendors, service providers? Optimize your existing offerings with new value-added services and control the cost of your infrastructure. The entire Microsoft Hosting France team will welcome you on January 28, 2016 on the Microsoft campus, from 9:30 AM to 5:00 PM.

    Registration link and agenda to come.




    Symptom
    In Click-to-Run (C2R) builds of Outlook 2016, such as Office 365 ProPlus and Office 365 Solo, sending a message that was saved in .msg format results in a blank (empty) message body.
    * For Outlook 2013, please see this article.

    Steps to reproduce
    1. Open a new message window from [New Email] and enter a subject and body.
    2. Save the message item in .msg format, for example to the desktop.
    3. Open the saved .msg item and send it.
    4. Check the sent or received message.

    - Result
    The body of the message item is blank (empty).

    - Additional information
    The issue has been confirmed with the HTML and Rich Text message formats. It does not occur with the Plain Text format.


    Conditions
    The issue may occur in environments that meet both of the following conditions.

    - Condition 1
    Click-to-Run Outlook 2016 with the latest update applied, i.e. a detailed version of 16.0.6366.2036 or later.
    * To check the detailed version, see "How to check the detailed version" below.

    - Condition 2
    Connected to an Exchange server with Cached Exchange Mode enabled, or connected to another POP or IMAP server.


    Workaround
    You can avoid the issue by saving the message in .oft (template) format instead of .msg.

    - Steps
    1. Open a new message window from [New Email] and enter a subject and body. (Or open an existing .msg file.)
    2. From [File] - [Save As], choose [Outlook Template (*.oft)] under [Save as type] and save in .oft format, for example to the desktop.
    3. Open the saved .oft item and send it.
    4. Check the sent or received message.


    Investigation status
    Microsoft is currently investigating a fix for this issue.
    We will update this blog when more information is available or a fix is confirmed.


    How to check the detailed version
    1. Start Outlook 2016.
    2. Select the [File] tab.
    3. Select [Office Account] in the left pane.
    4. Check the version displayed in the right pane.

    The issue occurs when the version shown here is 16.0.6366.2036 or later.
    At this time, only customers deploying Click-to-Run Outlook 2016 on the Current Branch model have reached this version.
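    The four-part build comparison can be sketched as follows. This is not Microsoft tooling, just an illustration of comparing a version string against the affected threshold 16.0.6366.2036 mentioned above.

```python
# Sketch: is a given Outlook 2016 detailed version in the affected range?
# The threshold 16.0.6366.2036 comes from the article; the helper name is ours.

AFFECTED_FROM = (16, 0, 6366, 2036)

def is_affected(version: str) -> bool:
    """True if the detailed version is 16.0.6366.2036 or later."""
    parts = tuple(int(p) for p in version.split("."))
    return parts >= AFFECTED_FROM  # numeric, element-wise tuple comparison

print(is_affected("16.0.6366.2036"))  # True
print(is_affected("16.0.6366.2035"))  # False
```

    Note that comparing the raw strings lexically would be wrong (e.g. "16.0.9999..." vs "16.0.10000..."), which is why the parts are compared numerically.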

    - Note
    The issue does not occur in volume license (MSI) builds of Outlook 2016, even with the latest updates applied.
    Volume license builds display "Microsoft Office Professional Plus 2016", as shown below.


    The information in this article (including any attachments and links) is current as of the date it was written and is subject to change without notice.



    If you can't download something from PartnerSource (a KB or a CU), and an error appears after you click "View and download the hotfix …":

    If you are using Internet Explorer, try opening the link in InPrivate Browsing:



    Every so often I get the crazy urge to clean up my photo, music or video content, and when I do I love to see specific columns in File Explorer.  For photos, it's Date Taken and maybe even Camera Model.  For music, it's Album, Size and Protected (to sort between music I've purchased or not).

     

    The problem I had was keeping those settings for those folders.  I did some searching and found articles and notes on changing the registry, etc., but finally found that you just need to make a few simple changes: setting the media template and saving your customizations to that template.

     

    First, select the parent folder and set it to Music, Pictures, Documents, Video or General (whichever you want to customize).

    Let's stick with Music for this example:

     

    Right-click on the parent folder and select Properties -> Customize.

    Select the template you want to use.

    Make sure to select "Also apply this to all subfolders"

    Click OK to exit.

    Next, customize the folder view with the columns that you want:

    Right-click anywhere in the column header area…

    Select More if you don’t see the attribute you want to add on the quick list.

    For Example - I've added Protected and Size.

    For music I've also sorted by # so all tracks will appear in order.

     

    Once you have that setup the way you want it, go to View on the File Explorer menu and select Options -> Change Folder and Search options:

    Under the View tab, click Apply to Folders.  This will apply the changes you made above to all folders marked as Music.

    Repeat customizations for Pictures, Documents or just General folders.

    Hope that Helps!



    Summary: Use Azure PowerShell cmdlets to send a Desired State Configuration file to be used later by a virtual machine.

    Hey, Scripting Guy! Question: How can I use Windows PowerShell to send a DSC file to be used later by a virtual machine?

    Hey, Scripting Guy! Answer: Use the Publish-AzureVMDscConfiguration cmdlet and specify the path to your DSC script. The following example targets the DSC script called FileServer.ps1:

    Publish-AzureVMDscConfiguration -ConfigurationPath ".\FileServer.ps1"



    December 17, 2015, by Aaron Kornblum


    At Microsoft, we are continuously working to deliver on our commitment to the security of our customers and their ecosystems. An essential component of our strategy for informing Windows users about the safety of the websites and software applications they access online is built into the Microsoft Trusted Root Certificate Program. This program takes root certificates supplied by authorized certificate authorities (CAs) around the world and ships them to your device so it knows which applications and websites Microsoft trusts.

    Our efforts to provide a seamless, secure experience usually happen in the background, but today we want to tell you about some changes we have made to this program. These important changes will help us better protect against evolving threats affecting the ecosystem of websites and applications, but they may affect a small set of customers who hold certificates from the affected partners.

    Last spring, we began engaging certificate authorities (CAs) to solicit feedback and discuss upcoming changes to our Trusted Root Certificate Program. Among other things, the changes include more stringent technical and auditing requirements.

    The final program changes were published in June 2015. Since then, we have been working directly with partners, and through community forums, to help them understand and comply with the new program requirements.

    Through this effort, we identified some partners who will no longer participate in the program, either because they chose to leave voluntarily or because they will not be in compliance with the new requirements. Below, we publish a complete list of Certificate Authorities that are out of compliance or that voluntarily chose to leave the program; their roots will be removed from the Trusted Root CA Store in January 2016. We encourage all owners of digital certificates currently trusted by Microsoft to review the list and take any necessary action.

    Dependent certificate services that you manage may be disrupted if the certificate chain you rely on leads to a root certificate that Microsoft removes from the store. Although the exact text and screens vary depending on the browser a customer is using, here is what will typically happen:

    • If you use one of these certificates to secure connections to your server over HTTPS, when a customer attempts to browse to your website, the customer will see a message that there is a problem with the security certificate.

    • If you use one of these certificates to sign software, when a customer attempts to install that software on a Windows operating system, Windows will display a warning that the publisher may not be trusted. In either case, the customer can choose to continue.

    We recommend that all owners of digital certificates currently trusted by Microsoft review the list below and investigate whether their certificates are associated with any of the roots being removed as part of this update. If you use a certificate issued by one of these companies, we recommend obtaining a replacement certificate from another program provider. The list of all program providers is located at this link: http://aka.ms/trustcertpartners.

    With Windows 10, we will continue to work hard to deliver the safer experiences you expect from Windows, while keeping you in control.

    Certificate Authorities to be removed in January 2016

       

    CA | Root Name | SHA1 Thumbprint
    Certigna | Certigna | B12E13634586A46F1AB4ACFD2606837582DC9497
    Ceska Posta | PostSignum Root QCA 2 | A0F8DB3F0BF417693B282EB74A6AD86DF9D448A3
    CyberTrust | Japan Certification Services, Inc. SecureSign RootCA1 | E6419CABB51672400588F1D40878D0403AA20264
    CyberTrust | Japan Certification Services, Inc. SecureSign RootCA2 | 00EA522C8A9C06AA3ECCE0B4FA6CDC21D92E8099
    CyberTrust | Japan Certification Services, Inc. SecureSign RootCA3 | 8EB03FC3CF7BB292866268B751223DB5103405CB
    DanID | DanID | 8781C25A96BDC2FB4C65064FF9390B26048A0E01
    E-Certchile | E-Certchile Root CA | C18211328A92B3B23809B9B5E2740A07FB12EB5E
    E-Tugra | EBG Elektronik Sertifika Hizmet Saglayicisi | 8C96BAEBDD2B070748EE303266A0F3986E7CAE58
    E-Tugra | E-Tugra Certification Authority | 51C6E70849066EF392D45CA00D6DA3628FC35239
    LuxTrust | LuxTrust Global Root CA | C93C34EA90D9130C0F03004B98BD8B3570915611
    Nova Ljubljanska | NLB Nova Ljubljanska Banka d.d. Ljubljana | 0456F23D1E9C43AECB0D807F1C0647551A05F456
    Post.Trust | Post.Trust Root CA | C4674DDC6CE2967FF9C92E072EF8E8A7FBD6A131
    Secom | SECOM Trust Systems Co. Ltd. | 36B12B49F9819ED74C9EBC380FC6568F5DACB2F7
    Secom | SECOM Trust Systems CO LTD | 5F3B8CF2F810B37D78B4PECO1919C37334B9C774
    Secom | SECOM Trust Systems CO LTD | FEB8C432DCF9769ACEAE3DD8908FFD288665647D
    Serasa | Serasa Certificate Authority I | A7F8390BA57705096FD36941D42E7198C6D4D9D5
    Serasa | Serasa Certificate Authority II | 31E2C52CE1089BEFFDDADB26DD7C782EBC4037BD
    Serasa | Serasa Certificate Authority III | ED180289FB1E8A78909701480ADACA5973DFF871
    Wells Fargo | WellsSecure Public Certification Authority | E7B4F69D61CE9069DB7E90A7401A3CF47D4FE8EE
    Wells Fargo | WellsSecure Public Root Certification Authority 01 G2 | B42C86C957FD39200C45BBE376C08CD0F4D586DB

     

    How to determine your digital certificates

    If you are not sure how to determine the root of your digital certificates, here is some guidance, by browser. For more information about the program itself, visit http://aka.ms/rootcert.

    Microsoft Edge

    1. Navigate to a web page that uses your certificate.
    2. Click the lock icon (in the web address field); the company shown under "Identification" is the one that holds the root certificate.

     

    Internet Explorer

    1. Navigate to a web page that uses your certificate.
    2. Click the lock icon (in the web address field).
    3. Click View Certificates, then Certification Path.
    4. View the certificate name at the top of the certification path.

    Chrome

    1. Navigate to a web page that uses your certificate.
    2. Click the lock icon (in the web address field).
    3. Click Connection, then Certificate Information.
    4. Click Certification Path.
    5. View the certificate name at the top of the certification path.

    Firefox

    1. Navigate to a web page that uses your certificate.
    2. Click the lock icon (in the web address field), then click the arrow on the right.
    3. Click More Information, then View Certificate.
    4. Click Details.
    5. View the certificate name at the top of the certification path.
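    Administrators who prefer scripting can also compute a certificate's SHA1 thumbprint directly and compare it against the list above. A minimal Python sketch (the helper name is ours; the sample PEM is a placeholder, not a real certificate):

```python
# Sketch: a certificate's SHA1 thumbprint is the SHA1 digest of its DER encoding.
import base64
import hashlib
import ssl

def sha1_thumbprint(pem: str) -> str:
    der = ssl.PEM_cert_to_DER_cert(pem)           # strip PEM armor, get raw DER bytes
    return hashlib.sha1(der).hexdigest().upper()  # hex digest, upper-cased like the list above

# Placeholder "certificate" just to show the call shape - not a real cert.
fake_pem = ("-----BEGIN CERTIFICATE-----\n"
            + base64.encodebytes(b"not a real certificate").decode()
            + "-----END CERTIFICATE-----\n")
print(sha1_thumbprint(fake_pem))
```

    With a real certificate exported as PEM, the printed 40-hex-digit value can be matched against the thumbprint column in the table.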

    Aaron Kornblum, Group Program Manager, Enterprise Security, Governance, Risk Management and Compliance

     

     

    Original: https://blogs.technet.microsoft.com/mmpc/2015/12/17/microsoft-updates-trusted-root-certificate-program-to-reinforce-trust-in-the-internet/ 



    Hi, Jessica Payne from Microsoft Enterprise Cybersecurity Group's Global Incident Response and Recovery team guest starring on the Platforms PFE blog today.

    Credential theft is a major problem in the security landscape today. Matching local administrator passwords in an environment often contribute to that problem and are a popular target for bad guys. Far more than zero days or malware, credentials are what allow attackers to be successful in your network. I think this is best summed up by John Lambert from Microsoft Threat Intelligence Center.

     

    Randomizing the local administrator password has always been part of Microsoft guidance such as the Pass the Hash whitepaper; however, outside of solutions provided via a Premier offering, we didn't have a supported Microsoft way to do this.

    On May 1st, 2015, Microsoft released LAPS. LAPS stands for Local Administrator Password Solution, and it exists to address the problem of having a common administrator password in an environment. LAPS is a fully supported Microsoft product that is available for free! (Or "at no additional charge," as some of my colleagues would want me to say.) I've done a Taste of Premier episode on the technology, but wanted to do this post for the people who prefer blog posts as well.

    LAPS is designed to run in a least privilege model. There's no need to put a service account into Domain Admins to manage passwords; the password resets are done in the context of the computer/system. There's no additional server to install - the passwords are stored in Active Directory. This has led to some interesting discussion on the Internet, with some saying "that makes AD a clear target." Active Directory has always been a clear target for attackers, and has always held "keys to the kingdom" that would allow an attacker to take complete control of an infrastructure. That's why we really want you to be aware of what the threats look like and how to configure and administer AD in a secure manner (Best Practices for Securing Active Directory, the Pass the Hash whitepapers and my talk on Securing Lateral Account Movement are good references for that.) By storing the passwords in AD, we're piggybacking on the controls you already should have in place to protect against Pass the Hash, Domain Admin level compromise, the Golden Ticket post-exploitation technique, etc. LAPS, just like many other security controls, should be seen as part of a holistic solution. Just taking care of local administrator passwords is a great step and a massive reduction in overall attack surface, but without the other mitigating controls in an environment it's absolutely true that attackers will still be able to gain a foothold and compromise your entire network. Randomizing local passwords is just a step in a security strategy, but it's a necessary step which is now easy and free with LAPS.
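    The core idea - a unique, cryptographically random password per machine - can be sketched in a few lines. This is NOT LAPS's actual implementation, just an illustration of the randomization concept; the length and character set are hypothetical parameters.

```python
# Conceptual sketch only: per-machine password randomization.
# Not LAPS's algorithm; length and alphabet are made-up parameters.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_admin_password(length: int = 24) -> str:
    """Generate a cryptographically random password for one machine."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Every machine gets its own value, so one stolen hash no longer
# unlocks every other machine in the environment.
print(random_admin_password())
print(random_admin_password())
```

    The point of the sketch is the property, not the code: when each machine's local administrator password is independent and random, a credential harvested from one machine is useless for lateral movement to the next.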

     

    Installation

    Installing LAPS is actually really straightforward. The guide included in the download has a great walk through, and I ran through a demo install (as well as discussion on implementation strategies) in my Taste of Premier video on LAPS.

    Gotchas/Weird Stuff/Implementation Lessons Learned about LAPS :

    • Your biggest challenges are going to be developing a delegation model and a workflow for using the passwords. If your OU structure isn't laid out based on policy boundaries, or if you don't already have well-defined Role Based Access, this will be a challenge. Your workflow for accessing the passwords will dictate a lot of how you design the access. Are you planning on ever using the passwords? Are you just trying to stop Lateral Movement, so you see it just being a break-glass account and using DART disks instead? If you do have people access them, that should decrease your password expiration time - make the credential viable for less time. You may even use an orchestration engine to front the password checkouts, which has access to the ms-Mcs-AdmPwdExpirationTime attribute to make sure it resets right after use.
    • LAPS only randomizes one local account password. By default, it randomizes the built-in admin account and discovers it by well-known SID. A different local account can be specified via GPO, but bear in mind it is discovered by name. So if I'm Bad Guy Bob using an Elevation of Privilege in win32k.sys or Steve the Rogue Admin, having access even temporarily + rename account = permanent access. Account discovery and management is static in a lot of password products, not just LAPS, so it also means someone with access could create another local account and add it to admins and have permanent access - this is actually something we've seen real bad guys do.  Local accounts are tricky things to manage, which is why we created the Local Account principle. The strategy I suggest to my customers is to have 1 (one) local administrator account - the built-in one. The built-in 500 SID account is always there, always an admin and always something you can re-enable if you know what you're doing. Embrace that it's there, that LAPS will always find it and manage it. Which leads to …
    • Make LAPS part of your larger Credential Theft Mitigation strategy. Implement the steps in the Pass the Hash whitepaper, use Restricted Groups to be authoritative on who is an admin, deny Local Accounts access over the network and come up with a secure way to manage machines, such as RestrictedAdmin RDP with a non-admin account and elevation/RunAs locally with the LAPS managed password.
    • Monitor for the use and creation of local accounts. These are Indicators of Compromise and the successful logon of the local administrator account is a far more accurate metric of danger than auditing access to the password in many organizations.
    • Monitor for Lateral Movement on the whole in your environment. Stopping Lateral Account Movement from stolen credentials via Credential Hygiene and preventing the attacker wandering unfettered around your network is the #1 thing that would have made the Incident Responses I've been to this year less of an Incident. Even if you are using another password management product, Credential Theft Mitigation and stopping Lateral Account Movement are critical tasks for your environment. I'm doing a whole series on tracking Lateral Movement on the blog, so please feel free to follow along.
    • Since ms-Mcs-AdmPwd only stores one password, some customers have expressed concerns about what this means for a system restored from backup. The supported scenario there would be to reset the password with a supported tool such as DART. (DART is really cool, by the way; you can have it access BitLocker recovery keys and build all sorts of interesting actions into it. DART is a fully supported Microsoft product and a great "known good publisher" alternative to going out and downloading a hacking tool if you need to reset a password.)
    • There's a GPO setting for "do not allow Password Expiration Time longer than defined by policy." I consider that to be a non-optional option, as enabling this option means if someone sets the password expiration to be 300 days, LAPS will say "I think you meant 30 days."
    • Native LAPS auditing is … not optimal. The password is stored as an AD attribute, which means access is tracked via AD attribute auditing, Event ID 4662. The bad part about that is it can get really noisy; for instance, if someone who has access to the password opens Attribute Editor, it's going to show as a password read even if they didn't mean to. Kurt Falde has made great improvements to the auditing via Windows Event Forwarding and PowerBI on his blog.
    • Access to the password is granted via the "Control Access" right on the attribute. Control Access is an "Extended Right" in Active Directory, which means that if a user has been granted the "All Extended Rights" permission, they'll be able to see passwords even if you didn't give them permission. LAPS includes the Find-AdmPwdExtendedRights cmdlet to track down who has those permissions.
    • It's really straightforward to migrate from the unsupported AdmPwd solution or the SLAM scripts that LAPS has replaced; most people have no issues with it. If you have an issue moving from the SLAM scripts to LAPS, open a Premier case and we'll help you out.
    • Learn what really happens during an Incident Response and what attackers are actually doing to get into your network. The state of security in most IT organizations right now is borderline panic and a rush to "secure all the things," and knowing what attackers actually do allows you to prioritize what to fix first. (Spoiler alert: Credential Hygiene, that's what you fix first.) :)

     

    Plaintext password storage 

    One of the discussions that frequently comes up during LAPS implementations is the fact that the password is stored in plain text. Applying the proper ACLs to the attribute makes this a non-issue in most environments. If you don't have access to the passwords, you can't see them. We're securing access to the attribute (along with the entire directory) versus worrying about a case where the directory is already lost.

    There are other plaintext high-value attributes in AD, such as BitLocker keys, and due to the nature of the secrets stored in AD, loss of control of the database can lead to deeper compromise through other non-plaintext avenues. Strong ACLs and overall Credential Hygiene are the strategy to be using anyway, and applying them to LAPS is just another step. We did threat model the scenarios where plain text would be part of the attack, below. Remember that LAPS is just part of the Credential Theft Mitigation strategy, and LAPS attributes are just part of the very high value data you need to protect in Active Directory.

    Attack strategies to take advantage of plain text password storage:

    1. Acquire a copy of the NTDS.dit (the Active Directory database). The passwords would be in plain text, meaning the attacker doesn't have to crack them. This attack vector is superfluous, though, because if they have your NTDS.dit they don't need to crack the passwords, thanks to techniques like Pass the Hash. The machine computer account passwords are stored hashed in Active Directory just like user account passwords, so the attacker could already have admin/system level access on those computers without the local administrator passwords. Additionally, the AD database contains far more powerful accounts of interest than local admin accounts - Domain Admins, high value users and the KRBTGT account for Golden Ticket creation. While the passwords are in plaintext, capture of the NTDS.dit is already game over, so the plain text doesn't add additional attack surface here in our opinion. You should already be protecting your AD against theft, so having the local admin passwords there doesn't really affect the value of AD or the need to protect it.

    2. Steal the credentials or compromise the computer of someone with access to the passwords, then access admin passwords for multiple computers in the domain. In most environments, the initial stolen credential would belong to someone with wide-reaching admin access to all of the computer accounts they were delegated ms-Mcs-AdmPwd attribute access to - a help desk or desktop engineer - so this isn't really increasing the attack surface in this scenario. It can actually reduce the time to detection in some ways, or at least provide better monitoring for the compromise. Without the LAPS delegation, the theft of the single desktop engineer level credential would mean instant deep/wide privilege in the domain (the CEO's computer, for instance). Abusing LAPS password delegation to gain this access means they would generate a very clear audit trail, as they will have to work for each password. At worst, this is likely a net equal. The basis of the attack is that a single account had unrestricted access to assets, but that has nothing to do with plain text storage (or which credential vault you are using, since it's just stealing the identity of someone who would have access).

    3. To make use of the fact this is plaintext over the wire, you would have to use that stolen identity to open a tool such as LDP.exe that would send the password in plaintext over the network, and then sniff the credentials. Since they already had access to the credentials, this threat vector falls into the category of "post exploit technique" and is also superfluous. Active Directory Users and Computers, PowerShell and the LAPS UI all send the password in an encrypted/obfuscated traffic channel. So if you provision the password access only to secondary admin accounts locked down for use from a known good source such as an admin workstation/jump server that is already secured with software restrictions, credential tiering and network policies as recommended, this attack vector isn't likely to be the thing an attacker goes for.

    LAPS is just one part of a larger Credential Theft Mitigation and monitoring strategy, but an important one that you can implement for free. Hopefully this helps you on the way to a holistic security strategy.

     

     

    Here's some links to the resources I talked about:

    Pass the Hash Whitepapers:

    https://microsoft.com/pth

    Best Practices for Securing Active Directory:

    https://aka.ms/bpsadtrd

    Channel9 video on LAPS:

    https://channel9.msdn.com/Blogs/Taste-of-Premier/Taste-of-Premier-How-to-tackle-Local-Admin-Password-Problems-in-the-Enterprise-with-LAPS

    Blog posts on getting basic monitoring with Windows Event Forwarding in place and Tracking Lateral Movement:

    http://blogs.technet.com/b/jepayne/archive/2015/11/24/monitoring-what-matters-windows-event-forwarding-for-everyone-even-if-you-already-have-a-siem.aspx

    http://blogs.technet.com/b/jepayne/archive/2015/11/27/tracking-lateral-movement-part-one-special-groups-and-specific-service-accounts.aspx

    Detailed LAPS auditing building upon Windows Event Forwarding:

    http://blogs.technet.com/b/kfalde/archive/2015/11/18/laps-audit-reporting-via-wef-posh-and-powerbi.aspx

    -Jessica "http://aka.ms/jessica" Payne @jepayneMSFT

    (With a little editorial help and moral support from John Rodriguez and Aaron Margosis)



    Today is the last business day of 2015 for Microsoft Japan, and this is the last MPN blog post of the year. Thank you for your continued support of Microsoft products and for reading the MPN blog throughout the year. We will resume business, and posting to this blog, on January 5 - we look forward to your continued support next year.

    For the final article of 2015, the December update roundup that partners should not miss has been published. Please take a look when you have time.

    Note that some items require signing in with an MPN account to view.

     

    Legend

    [Event] … event information for partners
    [Seminar] … seminar information for partners
    [Training] … skill-development resources
    [Sales] … information for sales and marketing staff
    [Technical] … information for engineers

    [Support] … support information
    [Product] … product information
    [Licensing] … licensing information
    [Program] … Microsoft Partner Network program information

    For past monthly roundups, see here.

    ▼ Latest Microsoft news

     

    Past news is here.

    ▼ Latest: Microsoft Partner Network November 2015 roundup



      Over on the Microsoft Deployment Toolkit Team Blog Aaron Czechowski announced the availability of MDT 2013 Update 2. For those of you deploying Windows 10 images you will definitely want to take a look at what's new. Here's what Aaron posted about the updates.

      MDT 2013 Update 2 is primarily a quality release; there are no new major features. The following is a summary of the significant changes in this update:

      • Security- and cryptographic-related improvements:
        • Relaxed permissions on newly created deployment shares (still secure by default, but now also functional by default)
        • Creating deployment shares via Windows PowerShell adds same default permissions
        • Updated hash algorithm usage from SHA1 to SHA256

      • Includes the latest Configuration Manager task sequence binaries
      • Enhanced user experience for Windows 10 in-place upgrade task sequence
      • Enhanced split WIM functionality
      • Fixed OSDJoinAccount account usage in UDI scenario
      • Fixed issues with installation of Windows 10 language packs
      • Various accessibility improvements
      • Monitoring correctly displays progress for all scenarios including upgrade
      • Improvements to smsts.log verbosity
      • Fixed Orchestrator runbook functionality

       

      It's available now on the Microsoft Download Center and here are some of the details from the download page.

      Details

      Note: There are multiple files available for this download. Once you click on the "Download" button, you will be prompted to select the files you need.

      Version:

      6.3.8330

      File Name:

      MicrosoftDeploymentToolkit2013_x64.msi

      MicrosoftDeploymentToolkit2013_x86.msi

      Date Published:

      12/1/2015

      File Size:

      19.2 MB

      18.7 MB

          Microsoft Deployment Toolkit (MDT) 2013 Update 2 is for operating system deployment leveraging the Windows Assessment and Deployment Kit (ADK) for Windows 10.



      Feature Summary

          MDT is the recommended process and toolset for automating desktop and server operating system deployments. MDT provides you with the following benefits:

       

          • Unified tools and processes, including a set of guidance, for deploying desktops and servers in a common deployment console.
          • Reduced deployment time and standardized desktop and server images.


      Some of the key changes in MDT 2013 Update 2 are:

        • Support for the latest Windows ADK for Windows 10.
        • Improvements to deployment and upgrade of Windows 10.
        • Support for the current branch of System Center Configuration Manager (currently version 1511).

       

      System Requirements

      Supported Operating System

      Windows 10, Windows 7, Windows 8, Windows 8.1, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server Tech Preview

      Other Software Requirements

        • The Windows ADK for Windows 10 is required for all deployment scenarios.
        • System Center Configuration Manager, version 1511 or later, is required for zero-touch installation (ZTI) and user-driven installation (UDI) scenarios.
        • If you are using ZTI and/or UDI, you are allowed to add the MDT SQL database to any version of System Center Configuration Manager with SQL Technology; if you are using LTI, you must use a separately licensed SQL Server product to host your MDT SQL database.

       

      Install Instructions

      1. To start the download, select a file from the list of files in this download, and then click Download. Or, to save the .msi file to your computer for later installation, click Save.
      2. To run the .msi file from its current location, click Run.
      3. Accept the Microsoft Software License Terms.
      4. Follow the steps in the installation process to complete the installation.
      5. To start Deployment Workbench, look in All apps in the Start menu for Microsoft Deployment Toolkit.
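      If you need to script the installation (on a build or deployment server, for example), the MSI also installs silently with standard msiexec switches. A sketch; the log file path is an example:

      ```
      msiexec /i MicrosoftDeploymentToolkit2013_x64.msi /qn /l*v "%TEMP%\MDT2013u2_install.log"
      ```

      Run this from an elevated command prompt; the verbose log is useful if the unattended install fails.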

       

       



      I did a guest post over on the Ask PFE Platforms blog about the Local Administrator Password Solution (LAPS) this week. You can check it out here:

      http://blogs.technet.com/b/askpfeplat/archive/2015/12/28/local-administrator-password-solution-laps-implementation-hints-and-security-nerd-commentary-including-mini-threat-model.aspx

      -Jessica @jepayneMSFT 



      A few weeks ago, I posted an article on configuring SMB file shares in Azure to leverage shared drives from within Azure or from on-premises. But what if you are unable to access these SMB file shares due to port restrictions? Or if you only need to copy files to and from Azure storage, and do not require SMB file shares? AzCopy is a simple yet powerful command-line interface that allows you to copy files to and from Azure storage and between Azure storage accounts. The latest version, 5.0, now supports...(read more)
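      As a taste of the AzCopy 5.0 syntax covered in the full article, uploading a local folder recursively to a blob container looks roughly like this (the account name, container, and key are placeholders):

      ```
      AzCopy /Source:C:\Data /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:<storage-account-key> /S
      ```

      Swapping /Source and /Dest (and /DestKey for /SourceKey) gives you the download direction.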

    • 12/28/15--22:42: VMM Tips: Remote Management
    • Hello, this is Masudo from the Microsoft Japan System Center Support Team.
      How are you doing during this busy year-end season?

      Our System Center support team is also putting its full energy into your inquiries, so that our customers can welcome the new year with peace of mind.

      Today I would like to introduce remote management, a topic that customers using VMM often ask about.

      <VMM management console vs. Hyper-V / Failover Cluster Manager>

      Starting and stopping a VM, for example. Or mounting an ISO. Or failing over a VM.
      These can all be done without any problem from either the VMM management console or Failover Cluster Manager.
      Technically, of course, you may perform them from either one.
      However, when operations are performed outside of VMM, status inconsistencies occasionally occur.
      For example, a cluster node may be displayed differently in the VMM management console and in Cluster Manager.
      In such cases, you can restore consistency by refreshing the information as follows:


      <For Hyper-V hosts>


      * Refresh = updates the property information of the Hyper-V host server.
      * Refresh Virtual Machines = updates the list of virtual machines hosted on the Hyper-V host server.

      <For virtual machines>
       
      * Refresh = updates the property information of the virtual machine.
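      The same refresh operations can also be run from the VMM command shell. A sketch, assuming the VMM cmdlets are available and "HV01" is one of your managed hosts:

      ```powershell
      # Refresh the Hyper-V host's property information in VMM
      Get-SCVMHost -ComputerName "HV01" | Read-SCVMHost

      # Refresh the property information of each VM hosted on that server
      Get-SCVirtualMachine | Where-Object { $_.VMHost.Name -eq "HV01" } | Read-SCVirtualMachine
      ```

      This is handy when you want to refresh many hosts or VMs at once instead of right-clicking each one in the console.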

      To help prevent such inconsistencies in the first place, we also provide the following KB article:

      Data inconsistencies may occur when the settings or configuration of failover clusters, hosts, or virtual machines managed by System Center Virtual Machine Manager 2008 R2 or System Center 2012 Virtual Machine Manager are changed from Failover Cluster Manager or Hyper-V Manager
      https://support.microsoft.com/ja-jp/kb/2810814

      However, it is difficult to perform every operation from the VMM management console, and in some cases you have no choice but to use Hyper-V Manager or Failover Cluster Manager.
      At the same time, the Hyper-V server may be running Server Core and have no GUI. The same situation arises when the Hyper-V host was deployed from VMM.
      In such cases, how do you use Hyper-V Manager or Failover Cluster Manager?

      In that situation, consider using Add Roles and Features on the VMM server to install the Hyper-V and Failover Clustering management tools.

      The VMM server can reach every Hyper-V server, and since it is a server intended for centralized management in the first place, it is an appropriate place for these tools.

      You then have an environment where everything can be done from a single computer: operations available in the VMM management console are performed there, and everything else is done through Hyper-V Manager or Failover Cluster Manager.

      We hope this is useful for your day-to-day operations.
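      Adding those management tools to the VMM server can itself be scripted. A sketch for Windows Server 2012 R2 (feature names may differ on other versions):

      ```powershell
      # Install the Hyper-V and Failover Clustering management tools (GUI and PowerShell)
      Install-WindowsFeature -Name RSAT-Hyper-V-Tools, RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell
      ```

      After this, Hyper-V Manager and Failover Cluster Manager are available on the VMM server alongside the VMM console.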



      This week's interview is with Sergey Vdovin!

      sergey vdovin (aka evolex)'s avatar 

      Some of his stats:

      • 7 Wiki articles
      • 82 Wiki edits
      • 16 Wiki comments
      • 167 Forum replies

      He's published 7 quality articles. Here are some examples:

      SSRS: End-user defined parameters set and cascading hierarchy

      SSAS: Slicing and dicing over data differences between SSAS databases

      SSRS: Merge data from different datasources into one dataset inside SSRS report

      SSAS, SSIS: Extending Project Server OLAP Cubes via push-mode processing 

         

      It's great to FINALLY be able to interview Sergey, and we think you're going to love it! So let's dig in...

      ==========================

       

      1. Who are you, where are you, and what do you do? What are your specialty technologies? 

      Hello to the community. It is quite an exciting moment - the interview is a kind of direct personal assessment, while, for example, the guru competition is a kind of indirect assessment - writing the articles was a little bit easier. 

      My name is Sergey [Aleksandrovich] Vdovin (sergeyAvdovin.com). I have some expertise in building solutions based on the Microsoft Business Intelligence and Project Server platforms. Originally I am from Sverdlovsk, USSR (Yekaterinburg, Russia); the places that influenced me the most are depicted in the picture: 


      So I spent most of my time in Yekaterinburg: at school (a scientific university center), in a class with in-depth study of physics and mathematics, we discovered some interesting things like Cantor's theorem and the Schrödinger equation. The first two years at Ural State Technical University (now a part of URFU) were a pretty relaxing journey after the scientific center, and I was recommended by our radio-technical faculty dean to an energy-saving scientific firm inside the university - there I had an opportunity to try myself in different fields: electronics engineering, civil engineering, and software development. The latter was the most exciting, and I started to investigate software development around database platforms. 

      At some point SQL Server appeared, and following a pretty common evolution path, my experience came to cover many elements of the Microsoft Business Intelligence stack, plus several years of experience with Project Server. In parallel, one of our university professors from the Institute of Mathematics and Mechanics, Ural Branch of the Russian Academy of Sciences, proposed that I try some research, which resulted in some PhD-related activities (not finished, but still in mind). At some point the scientific and professional paths intersected in the Young Scientists Summer Program at the International Institute for Applied Systems Analysis, with a pretty exciting research project involving SQL Server and Wal-Mart: Using Data Mining for Explanation and Prediction of Systems Behavior. After that I moved to Moscow for several years:

      And I visited the exciting TechEd North America 2013 (first time in the U.S.):


      I lived 1.5 years in Bali, with a visit to Australia:


      And now again I'm in Yekaterinburg, thinking about new opportunities:

      2. What are your big projects right now? 

      Currently I have quite an important and interesting project - job searching with, fortunately, relocation - it is still not in a very active state - more like preparation - with relocation probably (not really sure) somewhere in that direction:

      Getting to Si, Ja, Oui, Hai, and Da 

      It is just one example of the numerous statistics showing that I will probably be more useful in some other country. The U.S. is a nice choice here, but if Trump becomes the president I may reconsider ;). 

         

      3. What do you do with TechNet Wiki, and how does that fit into the rest of your job? What is it about TechNet Wiki that interests you? 

      I do think that it is rather useful to have parts of your experience and thoughts reviewed by the community in order to connect your resume with a bigger real world (outside of a company). 

         

      4. Do you have any comments for product groups about TechNet Wiki? 

      It would be nice to have the latest Microsoft cloud Business Intelligence and Collaboration technologies incorporated into the TechNet Wiki environment - right now I'm writing this interview in Word Online - it is much less stressful than the TechNet Wiki editor ;). 

       

      5. Do you have any tips for new Wiki authors? 

      1. It is a well-known tip, but again (and for myself): it is good to read Wiki: How to Contribute Content to TechNet Wiki.
      2. Word Online is pretty comfortable for writing.

       

      6. What could we do differently on TechNet Wiki?  

      1. In my experience (and I saw some complaints from other members as well), the TechNet Wiki editor seems to be rather old and not really stable - it requires quite a lot of attention (or knowledge) from a writer compared to other Microsoft technologies. It reminds me every time of when we began to write articles in LaTeX, previously having only experience with Microsoft Word - on one of the first pages of the manual there was a pretty nice phrase: "Windows users harden by working with LaTeX" - it is a little bit confusing to have a similar hardening experience today ;).  
      2. Like a lot of people, I widely use the Google search engine, and in my experience the TechNet Wiki is a pretty hidden beast here, if you compare how often we see the content with how valuable it is - if something can be done here, I think it would be useful. For example, I came to the Wiki from an SSRS forum announcement, not from the common search I use in everyday life - although I saw links to the Wiki several times in search results, the overall impression was of something not really alive. 
      3. I think it has been discussed somewhere already, but it seems the navigation system of the TechNet Wiki could be improved. Microsoft has beautiful data navigation tools in the business intelligence cloud stack - a good candidate to solve this task. 

       

      That is all for today. Thanks for reading, and see you all at my site 

      sergeyavdovin.com 

      By the way, the site won one of the prizes (8) in the Russian Microsoft WebMatrix competition ;) 

      ===================================

      Well, thank you Sergey for such great answers and great contributions to SQL BI on TechNet Wiki! Also, I loved your answer to the first question, taking us on a tour of where you've lived and visited! It's great to get to know you!

       

      Everyone, please join me in thanking Sergey for his community contributions! 

       

      Have a fantastic new year in 2016!

         - Ninja Ed

       



      Hello everyone,

      If you have a GPO that locks out accounts after a certain number of failed attempts, and you are seeing frequent account lockouts on Windows 10 member machines following a migration from Windows 7 or Windows 8, make sure that you have installed the fixes on those Windows 10 machines.

      You can download them from Windows Update; the details are in this article:

      Cumulative update for Windows 10: October 13, 2015

      https://support.microsoft.com/en-us/kb/3097617

       

       

      Regards,

      Huu-Duc LÊ



      Summary: Matthew Hitchcock, Microsoft MVP, delves into how to troubleshoot problems with the Azure VM DSC Extension.

      If you joined me yesterday in Advanced Use of Azure Virtual Machine DSC Extensions, you saw how I created an advanced Desired State Configuration (DSC) file to configure my Azure VM. Taking parameter input means I can reuse my DSC file across different machines and different environments. Today I want to spend some time talking about troubleshooting because I recently experienced many hours of frustration with DSC on Azure VMs. Hopefully sharing what I learned can save you some pain.

      In my example yesterday, I gave the wrong value for a parameter so my configuration is failing. Let’s take a look at some of the ways we can troubleshoot Azure VM DSC Extension issues.

      Understanding what’s going on inside the VM

      Good troubleshooting starts with understanding what should be going on. To check that everything is healthy in the VM, we can:

      • Check the status and progress
      • Read the DSC log
      • Read the parameter values provided to the configuration

      Check the status and progress

      We can use the following command to examine the status of the VM DSC Extension (the equivalent to checking the Local Configuration Manager):

      Get-AzureVM -Service "MyService" -Name "MyVM" | Get-AzureVMDSCExtension

      This shows us the configuration that has been assigned and an indication of some of the parameters we specified.

      Image of command output

      Get-AzureVM -Service "MyService" -Name "MyVM" | Get-AzureVMDSCExtensionStatus

      Image of command output

      This gives us an indication of what stage the configuration is at for the DSC Extension. We can examine the state, which will show if there has been an error. In addition, the last time stamp tells us when the last event happened.

      Unfortunately, there is not a great deal of information here. A status message of “Errors occurred while processing the configuration ‘Configuration Name’” generally indicates an issue in reading the PowerShell code you provided and applying it to a MOF file. In short, there is something wrong with your code.

      We’ll need to go through the DSC code with a bit of a fine-tooth comb. Fortunately, my configuration is still small.

      Image of command

      Ah ha! The issue is likely that I have a dependency set where the name doesn’t actually match! The DSC engine didn’t understand when it could run this part of the configuration, so it decided to stop before it did something bad.

      I can correct this as follows:

      Image of command

      I can use the same commands that I used yesterday to upload the configuration (using the -Force parameter to overwrite the existing one) and to assign it to the Azure VM. I’ll once again monitor the progress by using the Get-AzureVMDSCExtension and Get-AzureVMDSCExtensionStatus cmdlets. Have some patience...it can take some time to start working again.

      Image of command output

      Read the DSC log

      Reading the DSC log is the equivalent of using -Wait -Verbose when pushing a DSC file. As we saw when we used Get-AzureVMDSCExtensionStatus, there is a DSCConfigurationLog property. This is initially empty, but if there is a value in there, it means that the DSC file has been successfully converted into a MOF file and is running or has run. We can use the following command to list it in our session:

      Get-AzureVM -Service "MyService" -Name "MyVM" | Get-AzureVMDSCExtensionStatus | Select-Object -ExpandProperty DSCConfigurationLog

      The following image displays the output as would be seen in a push configuration using -Wait -Verbose:

      Image of command output

      We see that the issue is that the domain is not found. So how do we check which domain we assigned as a parameter?

      Read the parameter values provided to the configuration

      If we need to validate the parameter values that we specified when we assigned the DSC file to the virtual machine, we can use the following command:

      Get-AzureVM -Service "MyService" -Name "MyVM" | Get-AzureVMDSCExtension | Select-Object -ExpandProperty Properties

      Image of command output

      Seems like that’s where the issue is! We have a bad parameter value.

      Updating parameter values

      So, we have seen that the issue is a bad parameter value. My domain isn’t called hitchy.com. Oops!

      To update parameter values, I’ll go back to assigning the configuration to the VM as I did yesterday. This time, I’ll get it right. I don’t have to rush and scramble to do this—I can simply wait until the DSC Extension shows an error, which will stop the process. Then I can update the parameters in the original assignment command and run it again.

      Image of command output

      Now I can check again and see that the parameter set in the DSC agent is correct.

      Image of command output

      Great! We can now watch the DSC file complete by periodically using the Get-AzureVMDSCExtension cmdlet. Also remember that we can use the DSCConfigurationLog property again if we want to see a play-by-play of how it all happened.

      Image of command output

      When I see that it has successfully completed, I can log in with my domain credentials and verify that the share exists on the server. Job done!

      How I wish I had approached Azure DSC troubleshooting 

      If I could go back to when I started working with this and talk to my slightly younger self, I'd give myself the following advice:

      Be patient

      Desired State Configuration is all about “eventual consistency.” The script automation most of us are used to allows us to watch everything happening interactively. When I start a script, I am used to seeing it run and work. With DSC, we need to trust it to complete and have only a periodic peek.

      When I saw that a server had not been performing an action for some time, it was so tempting to “give it a kick.” I had to break that habit and resist, trusting in my LCM settings and lack of errors in logs and allow it to complete in its own time.

      Build configurations by testing incrementally

      As someone comfortable with DSC syntax, I flew into writing configurations for my Azure VMs. I wasn't really testing as I went because I knew the syntax. I had used it before and I couldn't see a reason why it wouldn't work.

      After writing quite monolithic configurations and starting to assign them to VMs, I was hitting errors (such as the incorrect dependency error I showed), which weren't that helpful. I couldn't understand where my code could possibly be wrong because everything looked the same as how I would write a standard DSC file that I would push to a machine running Windows or assign to a pull server.

      In the end, I went back to the basics, starting with an empty configuration, adding configuration elements one-at-a-time, and testing before I moved forward.

      Had I adopted this practice from the start, I would have achieved much more, much faster; sent many fewer ranting emails; and spent less time questioning my ability!

      Fall back to DSC within a Windows host to test

      When something that you added to a configuration causes it to fail, a great way to see what's wrong is by trying it in a push configuration within another Windows VM. Compile the configuration into a MOF file and push it to the local system (one you can test and break). Look for compilation errors when you create the MOF file, and then when you run the configuration, use the -Wait and -Verbose switches so you can see what fails.
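      As a concrete sketch of that fallback (the configuration name, output path, and parameter here are examples, not from the original post): compile the configuration into a MOF locally, then push it with the verbose switches so each resource reports as it runs:

      ```powershell
      # Dot-source the configuration script, then compile it into a MOF under C:\DSCTest
      . .\MyConfig.ps1
      MyConfig -OutputPath C:\DSCTest -DomainName "contoso.local"

      # Push the MOF to the local machine and watch each resource apply in real time
      Start-DscConfiguration -Path C:\DSCTest -ComputerName localhost -Wait -Verbose -Force
      ```

      Compilation errors surface when the MOF is generated; resource-level failures surface in the -Verbose stream, which is far easier to read than the extension's status messages.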

      Stick with it

      The most important piece of advice I can give is to stick with it. Changing your habits to start configuring your infrastructure with code rather than manually is a hard thing to do. It can take longer to do the simplest tasks and it can be frustrating when error messages are not clear and you realize you have spent two days trying to join a machine to the domain.

      As tempted as I have been at times to "just do this manually and fix it later," I am glad I didn't. The rewards of breaking the manual habit and succeeding with automation are huge. The result of my personal struggles on a recent project mean that our teams around the world can spin up an environment to showcase a solution and show value faster. So I have to say it was worth it for me.

      That's all for today folks. I hope this helps you along with the DSC Extension and encourages you to try it. When you have your servers building themselves, it truly is magical. It also gives you more time to sharpen your skills as a reward for your work!

      If you have suggestions for additional troubleshooting actions that could be taken with the Azure VM DSC Extension, I would love to add them to my list. You can contact me on Twitter @hitchysg.

      ~Matthew

      Thanks for taking the time to share this information, Matthew.

      I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

      Ed Wilson, Microsoft Scripting Guy 



      This is Hakutani from the Exchange/Forefront support team.
      As of today, standard support has ended for several Forefront products.

      Title: On January 1, 2016, Microsoft support ends for specific Forefront-branded solutions
      URL: https://support.microsoft.com/ja-jp/kb/3106598

      Some of the information we have introduced on this blog can no longer be used, but other articles remain useful as a reference.
      In environments where Forefront has not yet been uninstalled, please continue to make use of our past articles.

      Articles that can no longer be used with Forefront products whose support has ended
      Title: A diagnostic tool for Forefront products: FSCDiag
      URL: http://blogs.technet.com/b/exchangeteamjp/archive/2014/11/17/3641209.aspx

      Title: Extending the license expiration date of Forefront Protection 2010 for Exchange Server (FPE)
      URL: http://blogs.technet.com/b/exchangeteamjp/archive/2014/05/09/3629247.aspx

      Title: Reporting mail incorrectly judged as spam by FPE as a false positive
      URL: http://blogs.technet.com/b/exchangeteamjp/archive/2015/03/17/3646617.aspx

      Title: Reporting mail incorrectly judged as malware by FPE as a false positive
      URL: http://blogs.technet.com/b/exchangeteamjp/archive/2015/03/25/3647110.aspx


      Articles you can continue to use
      Title: Memory shortage and high CPU load problems on Exchange servers where FPE is installed
      URL: http://blogs.technet.com/b/exchangeteamjp/archive/2014/05/14/3629562.aspx

      Title: How to uninstall FPSP following the end of support for Forefront products
      URL: http://blogs.technet.com/b/exchangeteamjp/archive/2015/09/30/how-to-uninstall-the-forefront-protection-for-sharepoint.aspx

      Title: How to uninstall FSSP following the end of support for Forefront products
      URL: http://blogs.technet.com/b/exchangeteamjp/archive/2015/12/04/how-to-uninstall-the-forefront-security-for-sharepoint.aspx

      Title: How to uninstall FPE following the end of support for Forefront products
      URL: http://blogs.technet.com/b/exchangeteamjp/archive/2015/12/22/how-to-uninstall-the-forefront-protection-for-exchange.aspx

      In particular, restarting the Microsoft Forefront Server Protection Controller service, described under "Workaround (when an urgent response is required)" in the article "Memory shortage and high CPU load problems on Exchange servers where FPE is installed," is a very effective remedy for many other problems as well.
      We have seen it resolve the majority of symptoms: the management console failing to start, large numbers of errors being logged, poor performance and slow operation, and so on.
      If a failure occurs and you need to detach Forefront (FSCUtility /disable), please consider restarting the Microsoft Forefront Server Protection Controller service once before detaching it.


      Also, in environments still running Forefront, a message like the following may appear during the day on 12/31, depending on your time zone.
      As the message states, this is a grace period, so rest assured that you can continue to use the product just as if it were still licensed.

      This blog has published information about Forefront Protection 2010 for Exchange Server and Forefront Protection 2010 for SharePoint, and this will be the last such article.
      We have written these posts in the hope of helping, even a little, to resolve problems in our customers' environments.

      We will continue to publish Exchange-related information on this blog, so we look forward to your continued support of this blog and the support team.


    • 12/29/15--02:11: My disk is full!
    • In this article, we are going to look at the subject of disk space.

      It sometimes happens that a partition ends up completely full even though we were convinced it still had quite a bit of free space.

      Several causes can lead to such a situation:

      • A VSS snapshot taking up space
      • Missing permissions that prevent you from seeing all the files
      • Alternate data streams
      • Security descriptors

      We will not go into detail on the first two cases, and will focus instead on the last two, which are the least well known.

      Alternate Data Streams (ADS)

      In this example, we have a 1 GB partition with only 93 MB left available.

      clip_image001

      This is the truth.

      You can use whatever tools you like: the truth is provided by Windows, and there are several ways to see it:

      1. The screenshot above

      2. A right-click > Properties on the volume also tells you that 927 MB are used and only 93 MB remain available

      clip_image002

      3. Selecting the contents of the partition to list it also gives an idea of the situation, because we can see that 3.47 MB of files occupy 903 MB on disk:

      clip_image003

      clip_image004

      Generally we are used to seeing the opposite, with NTFS compression, and in any case that is not what is happening here. We will come back to this point later.

      4. Chkdsk is, in any case, the ultimate tool that will give you the absolute truth about your partition's usage.

      clip_image005

      The report is very important to read, because it can give us a clue about another, little-known source of this problem: the "in use by the system" field. In the present case this part is negligible, and we can clearly see that we have 32 files occupying 940128 KB on disk.

      Let's come back to the difference reported by the third method: 3 MB of data occupying 903 MB on disk. For this example (we have very few directories and files), it is easy to narrow things down quite closely.

      Let's look at this file, which is supposed to occupy only 18 bytes but occupies 497 MB on disk

      clip_image006

      Given its contents, you would not imagine it occupying more than the 18 bytes mentioned:

      clip_image007

      The last word in this story is provided by the DIR command with the /R option, which was added in recent versions of Windows. This option lists the alternate data streams that may have been added to files. On older versions of Windows where dir /R is not available, you can use streams.exe to obtain this information. See: https://technet.microsoft.com/en-us/sysinternals/bb897440.aspx

      clip_image008

      Keep in mind that every file has a main stream (roughly speaking, its contents) and potentially secondary streams, more commonly called alternate data streams (ADS). Several articles cover this subject, including https://msdn.microsoft.com/en-us/library/windows/desktop/aa364404(v=vs.85).aspx. Many applications can use these ADS (antivirus, SQL, etc.)
      In the screenshot above, Fichier1.txt occupies 18 bytes but has an ADS, alt.txt, that occupies 520 MB; likewise Fichier2.txt, which is only supposed to take 88 KB, has a 420 MB ADS. The other files have no ADS.

      For a volume containing several thousand files, you can run a "DIR /S /R", or even pipe it to a findstr command, to get the list of files containing ADS (alternate data streams).

      clip_image009

      The streams.exe tool mentioned above can delete these ADS, but you may want to know what they contain. Let's use Notepad, for example

      clip_image010

      It was clearly some joker playing a bad prank on us

      clip_image011
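      To reproduce this on a test NTFS volume, an ADS is easy to create and read back from a command prompt (the file and stream names below are examples):

      ```
      rem Create a small file, then attach an alternate data stream to it
      echo visible content> file1.txt
      echo hidden content> file1.txt:alt.txt

      rem A plain DIR reports only the main stream's size...
      dir file1.txt

      rem ...but DIR /R also lists file1.txt:alt.txt:$DATA
      dir /R file1.txt

      rem And Notepad can open the stream directly
      notepad file1.txt:alt.txt
      ```

      Deleting the file removes its streams with it; streams.exe can remove a stream while keeping the file.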

      Security Descriptors

      Consider this situation, in which Explorer shows us 4 GB available on a 99 GB disk

      clip_image012

      Let's set aside display permission issues and VSS snapshots.

      In this case, the size of the files matches the size occupied on disk, so a priori this is not a stream problem. In any case, a Dir /S /R | findstr /C:"$DATA" will show nothing in the present situation.

      clip_image013

      In all these situations, run chkdsk to get the picture closest to reality:

      The type of the file system is NTFS.
      Volume label is Application.
      ...
      104724479 KB total disk space.
      32643000 KB in 56668 files.
      4113996 KB in 4202 indexes.
      0 KB in bad sectors.
      59304299 KB in use by the system.
      65536 KB occupied by the log file.
      8663184 KB available on disk.

      The crucial item in this report is "in use by the system", which accounts for 56 GB of data.

      This part groups together all the structures used to describe files and directories. They are not listed by a DIR and are not necessarily resident in the MFT. Indeed, since a record generally occupies 1 KB, some information may not be stored in the MFT itself but attached to the file, for example the security descriptors.

      Conveniently, our developer friends added the /sdcleanup option to chkdsk, and that is exactly what it does: a cleanup of the security descriptors.

      clip_image014

      This operation takes time (more than 5 hours in the case I encountered), but a chkdsk /f /sdcleanup brought us from 56 GB down to 2 GB.

      The type of the file system is NTFS.
      Volume label is Application.
      ...
      104724479 KB total disk space.
        33209888 KB in 44389 files.
           17592 KB in 4227 indexes.
               0 KB in bad sectors.
          217159 KB in use by the system.
           65536 KB occupied by the log file.
        71279840 KB available on disk.

      There you have it: now you know everything. We will probably have an article on VSS later this year detailing what can be found in System Volume Information.

      Serge Gourraud

      55 AA



      Hello TechNet Wiki Community!

      Today is Tuesday - Article Spotlight day!

      And today's spotlight goes to the article NodeJS com TypeScript e Task Runners - Visual Studio Code

      Created by the contributor 


      This great article, written by  and titled NodeJS com TypeScript e Task Runners - Visual Studio Code, demonstrates development across the various platforms involved, explaining what each application is.

      In the words of our contributor:

      TypeScript

      TypeScript is an open-source language created by Microsoft whose goal is to enable scalable JavaScript applications, with a more pleasant syntax and extended language features. Essentially, code written in TypeScript is transpiled (a process in which one language is transformed into another at a similar level of abstraction - Steve Fenton) into JavaScript. 

      Visual Studio Code - Task Runners

      Visual Studio Code, currently at version 0.10.6, includes features to automate the compilation of our TypeScript and tasks in general, defined by default in an editor configuration file.
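      For reference, a minimal tasks.json for the Task Runner described above might look like this in VS Code 0.10.x (a sketch; building from a tsconfig.json via `tsc -p` is one common setup):

      ```json
      {
          "version": "0.1.0",
          "command": "tsc",
          "isShellCommand": true,
          "args": ["-p", "."],
          "problemMatcher": "$tsc"
      }
      ```

      The $tsc problem matcher maps compiler errors back to the editor's Problems view.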

      Thank you  for your contributions.

      Come help us by contributing to the TechNet Wiki BR community.

      See you next time!

      Wiki Ninja Jefferson Castilho (Blog, Twitter, Wiki, Facebook Profile)



      Hello and welcome everybody to our TNWiki Article Spotlight on Tuesday.

      Automation is an important part of the day-to-day work of administrators and developers. Some time ago we pointed out an article which shows how to start and stop VMs in Azure via Azure Automation. This is not the only thing you can do with Azure Automation. Imagine a situation where you have to create dozens of users in an Active Directory. You can do this by hand if you want, but can Azure Automation help at this point? Sure, it can. Daniel Örneling wrote an article to demonstrate how to automatically create AD users with Azure Automation and OMS. He starts by explaining the use case and why it is important. Then he goes directly into the demonstration. Step by step, and with a lot of images showing what to do, he walks through all the steps necessary to create an AD account with Azure Automation.

      If you like Azure and you want to automate annoying tasks, this article shows you the power of Azure Automation.

      - German Ninja Jan (TwitterBlogProfile)

