
Channel Description:

Resources for IT Professionals



    Microsoft is ensuring that customers can remain compliant with current guidance on using strong cryptography to implement security controls. A number of known vulnerabilities have been reported against SSL and earlier versions of TLS, which has shifted the security guidance toward TLS 1.2 for secure communication.

    No known vulnerabilities have been reported against Microsoft's TDS implementation, the communication protocol used between SQL Server clients and the SQL Server database engine.

    As of January 29, Microsoft SQL Server supports TLS 1.2 for SQL Server 2008, SQL Server 2008 R2, SQL Server 2012 and SQL Server 2014, and for major client drivers such as SQL Server Native Client, Microsoft ODBC Driver for SQL Server, Microsoft JDBC Driver for SQL Server and ADO.NET (SqlClient).

    You can read more about the release here. The list of builds that support TLS 1.2 along with the client and server component download locations is available in KB3135244.
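    Once the updated builds and client components are installed, TLS 1.2 is typically enforced through the SCHANNEL registry keys. As a hedged sketch (the helper function name is mine, not from the release), this PowerShell checks whether TLS 1.2 has been explicitly configured for the server role:

    ```powershell
    # Hedged sketch: report whether TLS 1.2 has been explicitly enabled or disabled
    # for the server role via the SCHANNEL registry key. Returns the Enabled DWORD,
    # or $null when no explicit setting exists (the OS default then applies).
    function Get-Tls12ServerSetting {
        param(
            # Real location on Windows; overridable so the logic can be exercised elsewhere
            [string]$KeyPath = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server'
        )
        if (Test-Path -LiteralPath $KeyPath) {
            (Get-ItemProperty -LiteralPath $KeyPath -ErrorAction SilentlyContinue).Enabled
        }
        else {
            $null   # no explicit setting present
        }
    }
    ```

    A `$null` result only means nothing explicit is configured; consult KB3135244 for the builds that actually honor the setting.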

  • 02/02/16--09:30: The underwater data center
  • When we think of data centers, we imagine buildings full of metal and cables, plus a great deal of electricity and power to run computers. We never imagined that a container resting in the depths of the ocean could be the source of that computing power. The idea of putting a data center under the sea came from the Microsoft Research team, through Project Natick. This team set out to innovate the way cloud computing is delivered, with the idea of...(read more)


    In Microsoft System Center Configuration Manager version 1511, if you attempt to edit the properties of a Windows Store app (deep-link app), the Connect to Remote Computer dialog box is displayed instead of the expected Browse Windows App Packages dialog box when you click Browse on the Content tab.

    A supported hotfix that addresses this problem is available from Microsoft. For complete details and a download link, please see the following:

    3125909 - FIX: "Browse Windows App Packages" dialog box is missing from System Center Configuration Manager version 1511

    J.C. Hornbeck | Solution Asset PM | Microsoft




    We’ve heard lots of requests for more details on how Operations Management Suite and System Center come together to give you a hybrid approach to IT management. On February 16, 2016, come learn about how you can extend System Center using the cloud-based capabilities of Operations Management Suite. The next webinar in our ongoing series covers the added value you can get from existing System Center investments by taking advantage of things like enhanced analytics, cross-cloud functionality, and automation of hybrid processes.

    In a hybrid world, IT operations management needs to evolve to bridge between on-premises resources and the cloud. In this webinar, Lindsay Berg and Won Huh will use technical scenarios and demos to show the connections between the familiar tools of System Center and the new functionality available in Operations Management Suite. Join us to get a drill-down view.

    Register for the webinar here.


    In reference to this article, we discussed what the Internet of Things is and which development tools are available, gave some examples of what can be done, and finally talked about the Raspberry Pi 2, on which we will also base the following articles. In this section we will see how to install Windows 10 IoT Core, an operating system created and designed by Microsoft for boards such as the Raspberry Pi 2, the MinnowBoard MAX and, one of the latest, the DragonBoard 410c. The newest addition, which we will cover in the future, is the Toradex kit, with which we can likewise create applications with Windows 10 IoT Core and UWP and take advantage of all the hardware provided in the kit. In this article we will see how to copy Windows 10 IoT Core: we will start with the required hardware, then move on to installing the operating system on a micro SD card with a tool called Windows IoT Core Image Helper, which transfers everything onto the card. Once that phase is complete, we just need to insert the micro SD card into the slot on the Raspberry Pi 2 and connect the board to mains power through its adapter, to the network, and to a monitor with an HDMI cable. Then comes the configuration of the Raspberry Pi 2 with PowerShell, where we will customize the PC name. With installation and configuration finished, we are ready to create our first application, which we will cover in the next article.

    To continue reading please follow this link

    Happy reading :)


    Last week the Halo World Championship Tour: X Games Aspen Invitational 2016 took place, awarding a total of 30 thousand dollars in prizes. The event was a dream come true for thousands of fans and followers of the Halo saga: despite the venue's low temperatures, the battle stations heated up with the drama that unfolded over the three days of qualifying rounds. The Xbox Wire team...(read more)



    Hello everyone! My name is Palash Acharyya, and today I am going to talk about a support case I worked where we eventually determined that the vendor's DSM can be the better solution when their storage is used in an environment of virtual HBA-based VMs running on Windows Server 2012/2012 R2.


    When moving a VM from one host to another host, Hyper-V does not maintain the VM’s MPIO path’s TPGID/RTPID. Unknown to the VM’s DSM (e.g. MSDSM), a given path may now be in a different target port group. This can cause the VM to incorrectly think a Standby path is actually Active Optimized (and vice-versa). I/O can then get routed down a Standby path. This leads to significantly delayed I/O performance within the VM. In a worst case scenario, we have seen Live Migration failures.

    We use Nimble storage as the example in this case. The issue is resolved in their DSM by monitoring for TPGID/RTPID changes.

    At times, we might see significantly decreased I/O performance in the VM after a Live Migration. On a rare occasion, we also saw live migration failures. We frequently see TPGID/RTPID mismatch which is the root cause of this issue (detailed below). Typically, there is no error message other than the I/O performance to the SAN is degraded. After a live migration, mpclaim reports the following for MPIO Disk 0:

    C:\Users\Administrator>mpclaim -s -d 0

    MPIO Disk0: 08 Paths, Round Robin with Subset, Implicit Only
    Controlling DSM: Microsoft DSM
    SN: ED8C7618566F92206C9CE900A9C2D800
    Supported Load Balance Policies: FOO RRWS LQD WP LB

    Path ID State SCSI Address Weight
    0000000077060001 Standby 006|000|001|000 0
    TPG_State : Standby , TPG_Id: 1, : 4

    0000000077060000 Active/Optimized 006|000|000|000 0
    TPG_State : Active/Optimized , TPG_Id: 2, : 8

    0000000077050001 Standby 005|000|001|000 0
    TPG_State : Standby , TPG_Id: 1, : 2

    0000000077050000 Active/Optimized 005|000|000|000 0
    TPG_State : Active/Optimized , TPG_Id: 2, : 6

    0000000077040001 Active/Optimized 004|000|001|000 0
    TPG_State : Active/Optimized , TPG_Id: 2, : 5

    0000000077040000 Standby 004|000|000|000 0
    TPG_State : Standby , TPG_Id: 1, : 1

    0000000077030001 Active/Optimized 003|000|001|000 0
    TPG_State : Active/Optimized , TPG_Id: 2, : 7

    0000000077030000 Standby 003|000|000|000 0
    TPG_State : Standby , TPG_Id: 1, : 3

    Above you see eight paths to the volume: four Active Optimized and four Standby. Note the TPG_Id values. These values never change during the lifetime of the volume (at least not with MSDSM). After a Hyper-V Live Migration, we noticed that some of those TPG_Id values were no longer correct. So Nimble storage developers wrote an in-house tool to issue a VPD page 83h inquiry down each path and display the "true" TPG_Id. When tested with MSDSM, we found the following:

    System Disk, Path ID, SCSI Address, TPG_Id
    2, 77060001, 006|000|001|000, 1:4
    2, 77060000, 006|000|000|000, 2:8
    2, 77050001, 005|000|001|000, 1:2
    2, 77050000, 005|000|000|000, 2:6
    2, 77040001, 004|000|001|000, 2:5
    2, 77040000, 004|000|000|000, 1:1
    2, 77030001, 003|000|001|000, 1:3
    2, 77030000, 003|000|000|000, 2:7

    Above we see that Path IDs 77030001 and 77030000 are now out of sync. The DSM thinks Path ID 77030001 is at TPG_Id 2:7, but it's now at 1:3. Likewise, the DSM thinks Path ID 77030000 is at TPG_Id 1:3, but it's now at 2:7. This will cause the DSM to incorrectly route I/O down a Standby path about 25% of the time (i.e. 1 of the 4 active paths is out of sync).
    Both Hyper-V hosts were running Windows Server 2012 R2 Standard with all updates. Guest VM failures were seen with 2008 R2, 2012, and 2012 R2 guests.

    Terms used above:

    DSM: DSM or Device Specific Module incorporates knowledge of the manufacturer's hardware. It interacts with the MPIO driver.

    TPGID/RTPID: Target Port Group ID / Relative Target Port ID


    More Information:

    Page 18 of the publicly available "MPIO Users Guide for Windows Server 2012" clearly states:

    Determining whether to use the Microsoft DSM vs. a Vendor’s DSM

    To determine which DSM to use with your storage, refer to information from your hardware storage array manufacturer. Multipath solutions are supported as long as a DSM is implemented in line with logo requirements for MPIO. Most multipath solutions for Windows today use the MPIO architecture and a DSM provided by the storage array manufacturer. You can use the Microsoft DSM provided by Microsoft in Windows Server 2012 if it is also supported by the storage array manufacturer. Refer to your storage array manufacturer for information about which DSM to use with a given storage array, as well as the optimal configuration of it.

    NOTE: Multipath software suites available from storage array manufacturers may provide an additional value-add beyond the implementation of the Microsoft DSM because the software typically provides auto-configuration, heuristics for specific storage arrays, statistical analysis, and integrated management. We recommend using the DSM provided by the hardware storage array manufacturer to achieve optimal performance because the storage array manufacturer can make more advanced path decisions in their DSM that are specific to their array, which may result in quicker path failover times.

    The default output of Get-MPIOSetting has PathVerifyEnabled set to 0. Even if we enable it and increase the PathVerificationPeriod via the registry, it is better to defer to the vendor's judgment and use their DSM instead.

    ~ Palash

    This information is provided 'as-is' with no warranties


    Summary: Learn about the default parameter values in Windows PowerShell.

    Hey, Scripting Guy! Question How can I find more about default parameter values in Windows PowerShell?

    Hey, Scripting Guy! Answer Use the Get-Help cmdlet and search for *defaultParameter*. The following command returns a list of Help topics that provide this information:

    help *DefaultParameter*
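    Among the topics that search returns is the $PSDefaultParameterValues preference variable (documented in about_Parameters_Default_Values). A minimal sketch of how it works, using Get-ChildItem only as an illustrative target:

    ```powershell
    # Keys use the 'CmdletName:ParameterName' form; this sets a session-wide default.
    $PSDefaultParameterValues = @{ 'Get-ChildItem:Force' = $true }

    # Every Get-ChildItem call in this session now behaves as if -Force were passed;
    # a default can still be overridden per call, e.g. Get-ChildItem -Force:$false.
    $PSDefaultParameterValues['Get-ChildItem:Force']
    ```

    Defaults can also be scoped to many cmdlets at once with wildcards in the key, e.g. 'Send-MailMessage:SmtpServer' style entries for a whole module.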


    Tracking down the devices locking out accounts on an ADFS deployment is quite challenging. From an ADDS perspective, lockouts coming from a WAP server will look like they are coming from an ADFS server:

    Lockouts coming from internal client using Form Based authentication also look like they are coming from the ADFS server itself and not the device.

    What can I do?

    You can dig through the security event logs of the ADFS servers and fish for the right information. Quite perilous, eh? The first thing to do is to ensure we capture the information, so we need to enable auditing on your ADFS servers. Two things are needed to achieve that:

    1. Configure the auditing on the ADFS farm:
    2. Configure the OS of the ADFS server to audit application generated events:

    You could do it with a domain group policy to ensure that all your ADFS servers have the same configuration. If you want to go geek, here is the PowerShell to enable the audit on your ADFS farm:

    Set-AdfsProperties -LogLevel Information,Errors,Verbose,Warnings,FailureAudits,SuccessAudits

    And here is the command line you can run locally on the server if you want to enable this kind of audit:

    auditpol.exe /set /subcategory:"Application Generated" /failure:enable /success:enable

    Now we will have security events containing IP address when an account gets locked out (we'll see which one later). Note that because of the load balancing, you cannot predict on which ADFS server the authentication will take place. So all the methods described in this article are looking at event logs on all servers in the farm.

    I use multiple devices at home

    If the device is behind a NAT, the source IP address of the lockout will just tell us that it is coming from your home, not whether it comes from your tablet, Xbox or fancy Windows Phone 10. Having the source IP isn't a panacea; you also want the device identity. That, unless you are using Workplace Joined devices, isn't possible. What we can do, though, is get the UserAgent string of the client and hope that it provides us with enough information to distinguish the device. Could you tell which UserAgent string is my Windows 10 and which one is my Windows Phone?

    • Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko
    • Mozilla/5.0 (Windows Phone 10.0; Android 4.2.1; Microsoft; Lumia 950 Dual SIM) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2486.0 Mobile Safari/537.36 Edge/13.10586

    Well, you guessed it: the first one is my Windows 10 laptop and the second one is my fancy Windows Phone 10 (interestingly, my Windows Phone's browser also advertises that it could be an Android 4.2.1, a Chrome or a Safari).
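    As a rough illustration (the function and its patterns are mine, not from any product), a couple of telltale tokens are enough to tell those two UserAgent strings apart:

    ```powershell
    # Classify a UserAgent string by looking for telltale tokens.
    # The patterns are illustrative only; real UserAgent strings vary widely.
    function Get-DeviceHint {
        param([string]$UserAgent)
        switch -Regex ($UserAgent) {
            # Check the phone token first: its UA also mentions other platforms
            'Windows Phone'    { 'Windows Phone'; break }
            'Windows NT 10\.0' { 'Windows 10'; break }
            default            { 'Unknown' }
        }
    }

    Get-DeviceHint 'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko'
    # -> Windows 10
    ```

    A real implementation would want many more patterns (and should treat the result as a hint, not an identity).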

    Scenario 1: I am using the Extranet Lockout feature

    If you are not familiar with this feature, you can read this excellent post. In a nutshell, we lock the account on the ADFS server before it gets locked on the ADDS infrastructure, preventing a potential password-discovery attack from succeeding. For that we read the badPwdCount attribute from the PDC (note that if the PDC is not reachable during the attempt, the attempt will fail regardless of the password provided by the user and the account's status, locked out or not; see this article for details). This affects only password-based authentication attempts coming from a WAP server (for internal clients, the ADDS account lockout policy still applies). The issue with this feature is that if the user gets locked out on the ADFS server only, you will not find a trace of the user being locked out on the ADDS servers. You will find the previous failed attempts, but still, the address will show that they are coming from the ADFS server.
    When a user is locked out on the ADFS server because of this feature, it generates the following event:

    As you can see, the 516 does contain interesting information such as the username, the external IP address of the device, the value of the badPwdCount, the date and time of the lockout and what WAP server it is coming from. However, it does not tell the UserAgent of the device. The event 403 does:

    But do you really want to parse your event logs and try to match events manually amongst hundreds of thousands of other events? Probably not. If we look at the 516, we also have an activity ID. This activity ID will be included in all other ADFS audit events related to the same activity. So if we take the activity ID of the 516 and look for a 403 carrying the same ID, we'll match the UserAgent to our lockout.

    Here is an example of a PowerShell script that looks for all user lockout events on all servers and matches them with the UserAgent. It will show you the time of the lockout, the external IP, as well as some information about the device thanks to the UserAgent string.

    #List all servers of your ADFS farm (fill in your server names)
    $_all_adfs_servers = "",""
    #XML filter that looks for the event 516 in the security event logs coming from ADFS
    $_xml_lockout_adfs = "<QueryList><Query Id=""0"" Path=""Security""><Select Path=""Security"">*[System[Provider[@Name='AD FS Auditing'] and (EventID=516)]]</Select></Query></QueryList>"
    #For each server, query the event logs looking for the last 100 lockout events
    $_all_adfs_servers | ForEach-Object {
        $_server = $_
        Get-WinEvent -ComputerName $_server -FilterXml $_xml_lockout_adfs -MaxEvents 100 | ForEach-Object {
            #Extract the operation ID
            $_operation_id_adfs = $_.Properties[0].Value
            #Show the details of the event
            Write-Output "Server:`t$_server"
            Write-Output "Account:`t$($_.Properties[1].Value)"
            Write-Output "ExternalIP:`t$($_.Properties[2].Value)"
            Write-Output "DateTime:`t$($_.Properties[4].Value) $($_.Properties[5].Value)"
            #Craft another XML filter to look for event 403 entries whose operation ID matches the one from the 516
            $_xml_lockout_adfs_useragent = "<QueryList><Query Id=""0"" Path=""Security""><Select Path=""Security"">*[System[Provider[@Name='AD FS Auditing'] and (EventID=403)]] and *[ EventData[ Data and (Data='$_operation_id_adfs') ] ]</Select></Query></QueryList>"
            Get-WinEvent -ComputerName $_server -FilterXml $_xml_lockout_adfs_useragent -MaxEvents 1 | ForEach-Object {
                #Display the UserAgent
                Write-Output "UserAgent:`t$($_.Properties[8].Value)"
            }
            Write-Output "--"
        }
    }

    And here is the output:

    Account: ad\jean
    DateTime: 2/2/2016 7:16:15 PM
    UserAgent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko

    The above PowerShell is quite basic: no error management, no user input, etc. You can go fancy and make a more sophisticated version! Why only the external IP address and not the internal one, in case the lockout comes from an internal connection? The Extranet lockout feature, as the name suggests, only works for extranet connections coming through the WAP.

    Scenario 2: I am not using the Extranet Lockout feature

    In this case the account is going to be locked out on the ADDS servers. So you will find the event 4740 on your domain controller, but you will not find the event 516 on your ADFS servers. So what will you see in the logs? This:

    Great, we can look up the username, get the Activity ID, and thanks to the Activity ID track this down to the UserAgent string. The problem is that the username is displayed exactly the way the user typed it. So if the user typed AD\jean or aD\Jean or Jean@ad.Piaudonn.Com, these are all different strings... So the first thing to do is to look up the actual username typed by the user. For that we need to extend our previously configured audit capabilities. We need the event 4625 to be logged on the ADFS server. If the user tried to log in with the username AD\JeAn, the event will show it:

    If the user typed it will look like this:

    This event preserves the case. To enable this audit on all our ADFS servers (not the ADDS servers), we activate the following audit category:

    (technically we could enable only Failure, but Success does not generate noise)

    So here is the logic:

    1. Get the actual username input from the event ID 4625
    2. Look for the event 411 that contains that username and retrieve the activity ID
    3. Look for failed authentication related to that activity ID

    How do we automate this? Let's look for all locked-out accounts listed on all ADFS servers and prompt you to choose which lockout event you wish to see additional information for:

    #Define all your ADFS servers (fill in your server names)
    $_all_adfs_servers = "",""
    #XML filter to look for the event 4625
    $_xml_lockout = "<QueryList><Query Id=""0"" Path=""Security""><Select Path=""Security"">*[System[Provider[@Name='Microsoft-Windows-Security-Auditing'] and Task = 12546 and (EventID=4625)]]</Select></Query></QueryList>"
    #_pick_one stores the lockout events the user can choose from
    $_pick_one = @()
    #List all lockout events on all servers
    $_all_adfs_servers | ForEach-Object {
        $_server = $_
        #List all the event 4625
        Get-WinEvent -ComputerName $_server -FilterXml $_xml_lockout -Oldest -MaxEvents 100 | ForEach-Object {
            #Check what the username input was
            If ( $_.Properties[6].Value -ne "" ) {
                $_target_account = "$($_.Properties[6].Value)\$($_.Properties[5].Value)"
            } Else {
                $_target_account = $_.Properties[5].Value
            }
            $_pick_one += New-Object -TypeName psobject -Property @{
                Server = $_server
                Time = $_.TimeCreated
                Account = $_target_account
            }
        }
    }
    #Display all the results
    $_inc = 0
    $_pick_one | ForEach-Object {
        $_display_cases = $_pick_one[ $_inc ]
        Write-Host "$_inc`t-`t$($_display_cases.Server)`t$($_display_cases.Time)`t$($_display_cases.Account)"
        $_inc++
    }
    #Ask the user to choose (the input is not parsed or validated here)
    $_picked_inc = Read-Host "Select a lockout event (from 0 to $($_inc - 1))"
    #Once picked, look up the lockout details using the right username and get the operation ID
    $_picked = $_pick_one[ $_picked_inc ]
    $_xml_account = "<QueryList><Query Id=""0"" Path=""Security""><Select Path=""Security"">*[ EventData[ Data and (Data='$($_picked.Account)-The referenced account is currently locked out and may not be logged on to') ] ]</Select></Query></QueryList>"
    $_get_operation = Get-WinEvent `
        -MaxEvents 1 `
        -ComputerName $_picked.Server `
        -FilterXml $_xml_account
    $_operation_id = $_get_operation.Properties[0].Value
    #Look for events 410 and 403 containing the same Activity ID as the lockout event
    $_xml_operation = "<QueryList><Query Id=""0"" Path=""Security""><Select Path=""Security"">*[ EventData[ Data and (Data='$_operation_id') ] ] and *[System[(EventID=410) or (EventID=403)]]</Select></Query></QueryList>"
    $_get_info = Get-WinEvent `
        -ComputerName $_picked.Server `
        -FilterXml $_xml_operation
    #Display the results
    $_get_info | ForEach-Object {
        If ( $_.ID -eq 410 ) {
            Write-Output "DateTime: `t$($_picked.Time)"
            Write-Output "Server:   `t$($_picked.Server)"
            Write-Output "Account:  `t$($_picked.Account)"
            Write-Output "ExternalIP:`t$($_.Properties[10].Value)"
            Write-Output "WAPServer: `t$($_.Properties[12].Value)"
        }
        If ( $_.ID -eq 403 ) {
            Write-Output "UserAgent:`t$($_.Properties[8].Value)"
            Write-Output "InternalIP:`t$($_.Properties[2].Value)"
        }
    }

    Here is the output:

    0 - 02/02/2016 19:07:33 ad\jean
    1 - 02/02/2016 19:07:34 ad\jean
    2 - localhost 02/02/2016 19:07:33 ad\jean
    3 - localhost 02/02/2016 19:07:34 ad\jean

    Select a lockout event (from 0 to 3): 0

    DateTime:  02/02/2016 19:07:33

    Account:   ad\jean
    WAPServer:  adfsproxy01
    UserAgent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko

    Again, as previously, the script is doing the minimal work, not even checking if the user's input is correct. So please, go fancy and improve it :)

    If the user is connected internally, the script still works.

    My External IP is always the IP address of the WAP server

    I'll be brief on this section since it isn't really an ADFS issue. If there is some NAT in front of your WAP load balancer farm, the incoming connections are actually coming from the WAP server itself and you'll see the X-MS-Forwarded-Client-IP header with the internal IP of the WAP server. In that case, you'll have to look for a way to get that info right with your load balancer provider. Some of them support SNAT and will be able to help in that situation. So bing it! :)


    Sigcheck v2.5
    This update to Sigcheck, a command-line utility that reports detailed information about images, including their signatures and VirusTotal status, as well as certificate stores, now reports all the signatures of images that have multiple signers.

    Sysmon v3.21
    This update fixes a paged pool leak of token objects when image logging is enabled. 

    Process Explorer v16.11
    This release of Process Explorer, a powerful process management utility, fixes a bug that caused it to crash when it encountered an image with a path length longer than a few thousand characters.

    Whois v1.13
    Whois, a command-line utility that reports domain name ownership information for the specified name or IP address, now includes a fix for a bug that would cause it to crash when passed an IP address with no DNS mapping.

    RAMMap v1.5
    This update to RAMMap, a utility that shows detailed information about physical memory usage, works on the latest version of Windows 10.


    The Enhanced Mitigation Experience Toolkit (EMET) benefits enterprises and all computer users by helping to protect against security threats and breaches that can disrupt businesses and daily lives. It does this by anticipating, diverting, terminating, blocking, or otherwise invalidating the most common actions and techniques adversaries might use to compromise a computer. In this way, EMET can help protect your computer systems even from new and undiscovered threats before they are formally addressed by security updates and antimalware software.

    Today we are pleased to announce the release of EMET 5.5, which includes the following new functionality and updates:

    • Windows 10 compatibility
    • Improved configuration of various mitigations via GPO
    • Improved writing of the mitigations to the registry, making it easier to leverage existing tools to manage EMET mitigations via GPO 
    • EAF/EAF+ pseudo-mitigation performance improvements
    • Support for untrusted fonts mitigation in Windows 10

    Mitigations in Windows 10

    EMET was released in 2009 as a standalone tool to help enterprises better protect their Windows clients by providing an interface to manage built-in Windows security mitigations while also providing additional features meant to disrupt known attack vectors used by prevalent malware. Since that time,  we have made substantial improvements to the security of the browser and the core OS. With Windows 10 we have implemented many features and mitigations that can make EMET unnecessary on devices running Windows 10. EMET is most useful to help protect down-level systems, legacy applications, and to provide Control Flow Guard (CFG) protection for 3rd party software that may not yet be recompiled using CFG.

    Some of the Windows 10 features that provide equivalent (or better) mitigations than EMET are:

    Device Guard: Device Guard is a combination of enterprise-related hardware and software security features that, when configured together, will lock a device down so that it can only run trusted applications. Device Guard provides hardware-based zero day protection for all software running in kernel mode, thus protecting the device and Device Guard itself from tampering, and app control policies that prevent untrusted software from running on the device.

    Control Flow Guard (CFG): As developers compile new apps, CFG analyzes and discovers every location that any indirect-call instruction can reach.  It builds that knowledge into the binaries (in extra data structures – the ones mentioned in a dumpbin/loadconfig display).  It also injects a check, before every indirect-call in your code, that ensures the target is one of those expected, safe locations.  If that check fails at runtime, the operating system closes the program.

    AppLocker: AppLocker is an application control feature introduced in Windows 7 that helps prevent the execution of unwanted and unknown applications within an organization's network while providing security, operational, and compliance benefits. AppLocker can be used in isolation or in combination with Device Guard to control which apps from trusted publishers are allowed to run.

    For more information on Windows 10 security features please review the Windows 10 Security overview whitepaper on TechNet.

    EMET 5.5 and Edge

    Given the advanced technologies used to protect Microsoft Edge, including industry leading sandboxing, compiler, and memory management techniques, EMET 5.5 mitigations do not apply to Edge.


    For support using EMET 5.5, please visit


    The EMET team

  • 02/02/16--17:50: Query for memory slot types
  • I was working with a customer to figure out how many different variations of computer memory card slots they had in the environment. Turns out there are many (DIMM A, DIMM_A, Slot 1, Slot_1, Top Slot, Bottom Slot, etc.). The query is for a very specific use case, but I'm putting it here in case it helps you out. If nothing else, it's a sample of how to do counts.

    SELECT DeviceLocator0, COUNT(*) AS SlotCount
    FROM            v_GS_PHYSICAL_MEMORY
    GROUP BY DeviceLocator0
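    Since the same physical slot can appear under several spellings (e.g. DIMM A vs. DIMM_A), a hedged variation on the query above (same ConfigMgr view; the underscore-to-space normalization rule is only an illustration) collapses those variants before counting:

    ```sql
    -- Normalize underscores to spaces so DIMM_A and DIMM A group together
    SELECT REPLACE(DeviceLocator0, '_', ' ') AS SlotType,
           COUNT(*) AS SlotCount
    FROM   v_GS_PHYSICAL_MEMORY
    GROUP BY REPLACE(DeviceLocator0, '_', ' ')
    ```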


    Hello Readers, I'm here to present my views on a very interesting new topic. While Disaster Recovery isn't a new topic for professionals, and I'm sure many of you have been involved in building or managing a DR setup, times have changed and cloud is inevitable: amazing things are happening in the space of offering DR as a service, aka DRaaS. Before I write any further, let's understand very briefly why anyone should care from a technology point of view...(read more)


    A new trailer, "Purpose," and gameplay footage have been released for Quantum Break, the Xbox One exclusive action adventure scheduled for release on Thursday, April 7, 2016.

    Time is power. A failed time-travel experiment has caused time itself to collapse. Jack Joyce, endowed with special abilities, travels across time and space to face an unprecedented crisis before the world meets its end, taking on the ruthless organization Monarch Solutions. Quantum Break is a new kind of action adventure that fuses extreme game action, set in a world where time is amplified, with a live-action suspense drama. Players will be captivated by the dramatic turns their actions and choices create.

    Quantum Break – Purpose

    (Please visit the site to view this video)

    Quantum Break – Gameplay footage

    (Please visit the site to view this video)


    Quantum Break website


    Quantum Break press materials

  • 02/02/16--22:40: TechNet and MSDN Forums
  • Do you have a technical question about Microsoft products or technologies? Ask it on our TechNet (IT professionals) or MSDN (developers) forums and you can be sure you'll get an answer to your question. You'll get replies not only from your peers and from professionals honored with the Most Valuable Professional or Microsoft Student Partner titles, but also from Microsoft experts. Do you know the answer to one of the questions on the forum? Share your knowledge with others. TechNet Forum MSDN Forum - Irena...(read more)


    As we documented in the deployment guide, the first preview release of Azure Stack doesn't support NVMe drives.

    However, we have heard that some customers might want to deploy Azure Stack TP1 on NVMe drives.

    That limitation applies only to the first preview version, and we will definitely support NVMe drives in a coming release. In this blog I will show how to modify the deployment scripts to enable NVMe support in Azure Stack Technical Preview 1.

    A. Enable NVMe and other bus type support

    To support NVMe, the first thing you need to do is modify the pre-check so that it allows the bus type "NVMe".

    1. Mount MicrosoftAzureStackPOC.vhdx in the downloaded package.

    2. Open X:\AzureStackInstaller\PoCDeployment\Invoke-AzureStackDeploymentPrecheck.ps1 with PowerShell ISE.

    3. On line 62, add the condition -or $_.BusType -eq 'NVMe' to the filter, as shown below.

    $physicalDisks = Get-PhysicalDisk | Where-Object { $_.CanPool -eq $true -and ($_.BusType -eq 'RAID' -or $_.BusType -eq 'SAS' -or $_.BusType -eq 'SATA' -or $_.BusType -eq 'NVMe') }

    By default, Azure Stack Technical Preview 1 only supports HDD or SSD+HDD configurations. It doesn't support All-Flash (all-SSD)* or NVMe+SSD. The reason is that when we enable Storage Spaces Direct ("Enable-ClusterS2D"), we don't specify any parameters, so Storage Spaces Direct will use the NVMe drives and SSDs as cache devices instead of persistent storage. To support the following storage configurations, we need to modify the deployment scripts and append different parameters. For more information, please refer to Claus Joergensen's blog.

    • B: NVMe+HDD
    • C: NVMe+SSD
    • D: All NVMe

    *Note: There is one exception. If you're using a non-pass-through bus type (e.g., RAID, iSCSI, or File Backed Virtual), Storage Spaces Direct cannot recognize the media type and will mark all the disks as "Unspecified" instead of HDD or SSD. In that case, Storage Spaces Direct will not use those drives as cache devices even if you didn't disable the S2D cache mode.
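    Before modifying the scripts, it can help to see which bus types and media types your host actually reports, since those are exactly the properties the pre-check and Storage Spaces Direct filter on. A quick diagnostic (not part of the deployment scripts):

    ```powershell
    # List candidate disks with the properties the Azure Stack
    # pre-check filters on: pool eligibility, bus type, media type.
    Get-PhysicalDisk |
        Select-Object FriendlyName, CanPool, BusType, MediaType, Size |
        Sort-Object BusType |
        Format-Table -AutoSize
    ```

    If your NVMe drives show up with BusType NVMe and MediaType SSD, the modifications in sections A through D below apply; drives showing "Unspecified" fall under the exception noted above.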

    B. Enable the Tiered Storage with different bus type

    In the NVMe+HDD configuration, we don't need to specify any parameters for Enable-ClusterS2D. However, NVMe drives report the bus type NVMe, while HDDs normally report SAS or SATA, and by default the deployment script only deploys Azure Stack on drives with the same bus type. So besides enabling NVMe support in the pre-check, you also need to modify the script "X:\AzureStackInstaller\PoCFabricInstaller\CreateStoragePool.ps1" and remove the bus-type condition ( -and $_.BusType -eq $DiskType ) shown below.

    Line 23:

    $pdisks = $pdisks | ? { $_.CanPool -eq $true -and $_.BusType -eq $DiskType }
    LogDisks "Disks have been picked up" $pdisks

    for ($i=0; $i -lt 3 -and $pdisks -eq $null; $i++) {
      sleep 5
      $pdisks = $clussubsystem | Get-PhysicalDisk | ? { $_.CanPool -eq $true -and $_.BusType -eq $DiskType }
    }

    Now you may deploy Azure Stack TP1 with the NVMe+HDD configuration.

    C. Deploy on Tiered Storage with NVMe and SSD drives

    To support the NVMe+SSD configuration, in addition to the steps in sections A and B, you also need to modify the script X:\AzureStackInstaller\PoCFabricInstaller\CreateFailoverCluster.ps1 and add the -S2DCacheDevice parameter shown below.

    Line 123:

    Enable-ClusterS2D -S2DCacheDevice NVMe

    D. Deploy on All-NVMe Storage

    To support the all-NVMe configuration, follow the steps in section A first, then modify the script X:\AzureStackInstaller\PoCFabricInstaller\CreateFailoverCluster.ps1 and add the -S2DCacheMode parameter shown below.

    Line 123:

    Enable-ClusterS2D -S2DCacheMode Disable

    Now you may dismount the DataImage VHD (MicrosoftAzureStackPOC.vhdx) and kick off the deployment.


    Summary: Learn how to install the Windows PowerShell ISE Preview edition from the PowerShell Gallery to Windows PowerShell 5.0 by using a one-line command.

    One of the way cool things is that the Windows PowerShell ISE is released to the PowerShell Gallery. “PowerShell Gallery?” you might ask. Yeah, the PowerShell Gallery.

    Although this version of the PowerShell ISE is currently still under limited preview, this does not mean that you can’t use it. In fact, you should be using it. I go to the PowerShell Gallery page, and type ISE in the Search box, and what comes back is a bunch of stuff related to the Windows PowerShell ISE:

    Image of menu

    I am interested in the PowerShell ISE-Preview, so I click it and it brings up the PowerShell ISE Preview page. But that page does not really tell me very much about the PowerShell ISE—why I would be interested in it, or even what it might do. It does tell me that it requires Windows PowerShell 5.0, that this is version 5.1.1, and that a new version was published two days ago, so I know that it is actively being developed.

    There are also two functions listed:

    • Start-ISEPreview
    • Install-ISEPreviewShortcut

    Here is the PowerShell Gallery page for the ISE Preview 5.1.1:

    Image of menu

    Use Find-Module

    Because I am using Windows PowerShell 5.0 in Windows 10, I already have Windows PowerShell cmdlets that let me interact with the PowerShell Gallery. In fact, using Windows PowerShell is the best way to interact with the Gallery, because as we just saw, the web pages do not seem to contain tons of information. The Windows PowerShell team blog has much more information about it in their initial announcement: Introducing the Windows PowerShell ISE Preview.

    I decided to use Find-Module to find the ISE-related modules. I type the following command:

    Find-Module *ise*

    The Windows PowerShell console tells me that I need a NuGet provider. It really doesn’t matter what that is—I need it so that I can interact with the PowerShell repository. I can say Yes, and it automatically installs and imports the provider. Cool. Here is the console:

    Image of command output

    Now that is cool. It downloaded, installed, and imported the provider, and then ran my query, which it remembered. Sweet! This is shown here:

    Image of command output

    Now all I have to do is to run Install-Module. A bummer thing is that Tab expansion doesn’t seem to work, and I feel kind of abused having to type a really long name like PowerShellISE-preview. But hey, if it really annoyed me that much, I could have copied the name or even used a wildcard character. I install the PowerShell ISE Preview in my current user scope. Here is the command:

    Install-Module -Name PowerShellISE-preview -Scope CurrentUser

    It tells me that I am installing from an untrusted repository. It also says that I can add the PowerShell Gallery as a trusted repository by using Set-PSRepository. I will do that later, but for now, I am going to say Yes and do the installation. A progress bar appears, and that is it. Here is the Windows PowerShell console at this point:
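    For the curious, here is what that Set-PSRepository step looks like. This is a sketch of the optional follow-up the prompt mentions (PSGallery is the default name the PowerShell Gallery is registered under), not something the installation requires:

    ```powershell
    # Mark the default PowerShell Gallery repository as trusted so that
    # Install-Module no longer prompts about an untrusted source.
    Set-PSRepository -Name PSGallery -InstallationPolicy Trusted

    # Confirm the policy change took effect.
    Get-PSRepository -Name PSGallery
    ```

    After this, subsequent Install-Module calls against the Gallery run without the untrusted-repository prompt.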

    Image of command output

    Launch the PowerShell ISE Preview

    To launch the Windows PowerShell ISE Preview edition, I type isep (ise preview) at the Windows PowerShell console:

    PS C:\> isep

    The ISE Preview launches and it looks like this:

    Image of menu

    Cool. Join me tomorrow when I will begin to explore the preview edition of the Windows PowerShell ISE.

    I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at, or post your questions on the Official Scripting Guys Forum. Also check out my Microsoft Operations Management Suite Blog. See you tomorrow. Until then, peace.

    Ed Wilson, Microsoft Scripting Guy 



    Script Download:  
    The script is available for download from  You can also use  Microsoft Script Browser for Windows PowerShell ISE to download the sample with one button click from within your scripting environment. 

    This PowerShell script sample shows how to fix 'there are no Remote Desktop License Servers available to provide a license' issue.

    You can find more All-In-One Script Framework script samples at



    By: Pål H. Aaserudseter, Sales Engineer, Riverbed Technology.


    A new day for performance in the cloud

    There is no doubt that Azure is growing. Presentations from AzureCon tell us that with 1.5 million SQL databases, 500 million users in Azure Active Directory, and growth of 90,000 users per month, Microsoft is betting heavily on cloud services.

    In addition, new services are rolled out all the time—IoT, Big Data, M2M learning, and more—at a pace that can make end users say: Hey! Slow down a bit.

    When you implement cloud services, the network infrastructure comes along for the ride. Delays that used to live on the local network become delays on the Internet.

    Local storage becomes cloud storage, and the job of moving services to the cloud can in some cases require enormous amounts of data to be transported, which in turn puts extra pressure on already overloaded, business-critical networks.

    Normally, you want ever more speed, better performance, and predictable response times for your services and applications. For customers with offices all over the world this is not always possible, due to the infrastructure in the country where they are located, the physical distance to the service they want to use, and other geographical and, not least, political conditions.

    This is where Riverbed comes in. With our SteelHead technology we can drastically reduce the bandwidth used and the time it takes to transport data to and from Azure!


    Riverbed's WAN optimization technology is a perfect fit for many of the services used in Azure, not least Office 365. HTTP/HTTPS as well as CIFS/SMB optimization is built into our SteelHead solution (in addition to over 1,400 other applications and protocols), so the vast majority of web- and file-based transfers to and from Azure are optimized.

    This includes SharePoint and other IaaS services, Azure web services, Azure Site Recovery, and more. To get better performance, faster response, and less data in transit, you need a SteelHead at each end of the traffic.

    In Azure you can install a SteelHead directly from the Azure Gallery (Marketplace)!




    There is of course documentation on how to set this up (Deployment Guide), so that an Azure SteelHead can talk to an on-premises SteelHead, whether physical or virtual.





    When two SteelHeads talk to each other, Azure traffic can be reduced drastically.

    This allows more traffic on your existing network (you don't need more bandwidth), and in the vast majority of cases it also gives much better performance for the applications the business uses.



    Microsoft has tested the Riverbed solutions

    Result: The optimization exceeds expectations!

    Microsoft tested the Riverbed SteelHead solution for reducing traffic in connection with Azure Site Recovery and found that with SteelHead optimization, traffic was typically reduced by 50% or more. In other words, the result exceeded the expectation. This is not unique to Azure Site Recovery; you can expect similar results for many types of services and applications.


    Riverbed – Performance, Visibility, and Control

    Caring about the end-user experience, performance for applications and networks, visibility into applications, networks, and infrastructure, and control of your data—whether in your own data center or in the cloud—is what Riverbed does.

    We have the solution you are looking for when things run "slow" or you can't find "the fault".

    Want to know more? Contact one of our authorized resellers, talk to your Microsoft representative, or contact us directly.

    You can read more about our portfolio for Performance, Visibility, and Control at


    Both we and Microsoft are at NIC – are you?

    If you're at NIC this year, you'll find both us and Microsoft there.

    Curious what our solution can do for you? Want more information? Or just want to have a chat?

    Feel free to stop by our booth—and whatever you do, don't miss the talk by our CTO, Hansang Bae.


    An update to the Payroll module ("Расчеты с персоналом") for AX 2009 SP1 has been published, containing changes to the 2-NDFL reporting form.

    The update contains the following changes:

    • A new parameter for specifying social tax deductions has been added;

    • A new field has been added to the Employee card for specifying the tax number in the country of citizenship;

    • The fields "Correction number" and "Cancellation" have been added to the tax register and the individual 2-NDFL print form, for generating correcting and cancelling certificates;

    • The 2-NDFL print form has been updated;

    • The XML format of the 2-NDFL certificate has been updated to version 5.4.

      The update is available at the following link

      The update is also available at the following direct link
