Channel: TechNet Blogs

New Azure Jumpstarts on Microsoft Virtual Academy


Microsoft Virtual Academy (MVA) offers online Microsoft training delivered by experts to help technologists continually learn. MVA is free of charge for partners and offers courses covering a wide range of Microsoft products.

In January, MVA delivered Azure training throughout an entire week, and now your partners can benefit from the on-demand online sessions!

 

Azure Week MVAs:

  • Get Started with Windows Azure Today Jump Start
  • Designing Applications for Windows Azure Jump Start
  • Building Modern Web Apps Jump Start
  • Windows Azure IaaS Deep Dive Jump Start
  • Mobile Apps to IoT: Connected Devices with Windows Azure


Windows Azure services now generally available in new Japan cloud region


The following post is from Takeshi Numoto, Corporate Vice President, Cloud and Enterprise marketing, Microsoft.


Less than a year ago, we announced Windows Azure services for Japan to support the growing demand for Microsoft’s cloud services. I am pleased to share that Windows Azure in Japan East (Saitama Prefecture) and Japan West (Osaka Prefecture) will be generally available later Tuesday.

Japan’s cloud market has grown considerably in the last few years and this growth is only expected to increase, with the projected 2014 forecast estimated at around $1.6 billion, according to IDC. These new regions will help fulfill the current and future needs of our cloud customers with secure and highly available services that help them grow their business. In addition, they provide local customers the ability to achieve data residency and realize data recovery scenarios, as data will be replicated between the two regions.

Building an enterprise class cloud infrastructure that exceeds our customers’ expectations is our ultimate goal. At Microsoft, we are making this goal real by bringing a deep enterprise legacy to our customers worldwide. We have already invested more than $15 billion in our cloud infrastructure, providing more than 200 cloud services to more than 1 billion customers in 90 markets around the world. Now, we are furthering our commitment to grow our cloud business and deliver a consistent experience across Windows Azure, partner-built and on-premises clouds.

With demand for Windows Azure increasing so significantly, we’re doubling capacity every six to nine months. Furthermore, in Japan alone, storage usage for Windows Azure has grown 10x in the last 15 months. As we continue at this growth rate, we will work with our customers and partners to ensure that we provide the value and support needed. We look forward to growing Japan’s cloud market, offering customers new options while helping push cloud adoption forward across the globe.

Windows Azure services now available in Japan


In a post Tuesday over on The Official Microsoft Blog, Microsoft Corporate Vice President of Cloud and Enterprise Marketing Takeshi Numoto announced that Windows Azure services are now available in Japan East (Saitama Prefecture) and Japan West (Osaka Prefecture).

“Japan’s cloud market has grown considerably in the last few years and this growth is only expected to increase, with the projected 2014 forecast estimated at around $1.6 billion, according to IDC. These new regions will help fulfill the current and future needs of our cloud customers with secure and highly available services that help them grow their business,” Numoto wrote.

Head on over to The Official Microsoft Blog to get the full story.

You might also be interested in:

· More health care pros than ever choose Surface as their go-to device for better patient care
· Joe Belfiore shares Windows and Windows Phone updates at Mobile World Congress
· Bing adds 15 more cities to explore in 3D, including San Francisco and Seattle

Jeff Meisner
Microsoft News Center Staff

Tuesday - Article Spotlight - SQL Server Time



I have had the opportunity to highlight several technologies on this blog, and now it is time to recommend some articles about a Microsoft product that I love. It is used by the vast majority of us, yet not everyone is aware of it.

In today's Article Spotlight I will highlight some recently published articles about SQL Server.

Nowadays, all information is tied to some kind of database, and SQL Server occupies a prominent place in this category thanks to its features and its excellent cost/benefit ratio.

Many people do not know all of its features and believe it to be an overpriced product.

In fact, its package of tools and services, including its extensive documentation, delivers value that few other relational database servers can match.

Below are some recently created articles about SQL Server and T-SQL (Transact-SQL, the language used by SQL Server) that were highlights of recent weeks, with many visits from the TechNet Wiki Brasil community.



SQL 2012: Contained Database

SQL Server 2014 - Buffer Pool Extension - BFE

Como extrair dados em XML para atender os requisitos de um Schema

Cloud Híbrida – SQL Server 2014 Backup para Windows Azure

Trocando o Collate de um Banco de Dados SQL Server

Identity e Sequence no Hekaton - SQL Server 2014



Working with data means transforming static content into something truly valuable and quick to find. That is a glimpse of the world called SQL Server.

Contribute more and share your knowledge to help everyone in the community!

See you soon,

Wiki Ninja Durval Ramos (Twitter | Profile)

Tip of the Day: Multimon and the Start Screen


Today’s Tip…

If you are like me and most of my coworkers, you like using a multi-monitor (multimon) setup to increase your desktop real estate. My favorite configuration is one larger monitor in the center, flanked by two smaller ones.


One thing I’ve noticed in doing this is sometimes my Start screen will end up on the middle monitor, and sometimes it will show up on either of the end monitors.


Recently I started to wonder (that’s how a lot of tips get created) about how I could pick what monitor the Start screen would appear on. I’d like to have it running on the right side monitor.

The easiest way I found to do this is to hover in the lower left corner of the monitor that I select and get the mini-Start screen.


Then just click it, and the Start screen pops up on that monitor.


Students, Get Your Code On! – Windows 8 App Madness Challenge – Check it out for your chance to win!

Wonder what the innovators do in their off time? They tinker. In The Garage.



Some of the company’s – if not the world’s – brightest minds don’t just turn off when the workday is done. They head to The Garage, a sanctuary for creativity and problem solving that harkens back to Microsoft’s start-up roots.

It’s here, fueled by Manny’s and pepperoni pizza, that Mouse Without Borders was born and the idea for the forgotten Outlook attachment reminder was hatched.


How to certify your career progression in 2014.



By Edward Jones, Firebrand Training

Whether you are seeking a promotion, a career change or a salary boost, certification is a great way of making it happen. Employers and recruiters have long considered certification a crucial part of the hiring process. After all, what better way to validate your skills and expertise than to attain an industry-recognised certification awarded by an independent body?


Benefits backed by numbers
This is not just idle speculation. CompTIA, the IT industry's non-profit trade association, published its Employer Perceptions of IT Training and Certification report, which highlights some compelling figures:

  • An overwhelming eighty-six per cent of recruiters singled out IT certifications as a medium-to-high priority when evaluating candidates.
  • IT professionals gained an average nine per cent salary increase immediately after receiving a certification, and twenty-nine per cent in the long term, versus uncertified colleagues.

So if you’re thinking about certifying your route to career progression in 2014, and you work with Microsoft Technology, have a look at some of the following options available to you. We’ll start at entry level and move through to advanced.

Beginning your career with Microsoft technology
Cloud is not the only area of demand: European Commission research predicts a shortfall of 700,000 IT roles by 2015, so IT skills are in short supply across the board. If you're looking to begin a career working with Microsoft technology today, start by attaining the Microsoft Technology Associate (MTA) certification.

There are three tracks available to you:

  1. Database – the first step for those looking to build a career in data platform administration or business intelligence working with Microsoft SQL Server.
  2. IT Infrastructure – if you've ever dreamed of working with computers and servers, prepare for your first role working with Windows Server and desktop operating systems.
  3. Developer – want to build websites, applications or games using Microsoft technology like Windows Azure or Visual Studio? The developer track will prepare you for what lies ahead.

If paying for training to attain the MTA isn’t an option, you could always look at taking a Firebrand IT Apprenticeship. If you’re between the ages of 16-24, you can apply for the programme to secure employment and get paid while you attain the MTA certifications.

The MTA provides a much-needed entry level introduction to Microsoft technology, setting you up for the Microsoft Certified Solutions Associate (MCSA) track. You can read more in this MTA case study.

A stepping stone
If you have experience working with Microsoft technology as a computer support specialist, system administrator or database developer/analyst, the MCSA certification reinforces your experience and proves you have the core technical skills required to build a long-term career in IT.

MCSA tracks focus on the core Microsoft technologies: Windows Server 2008 & 2012, SQL Server 2008 & 2012, and Windows 7 & 8. They provide a great stepping stone to the Microsoft Certified Solutions Expert (MCSE) certification. Those considering the certifications should be aware that the Microsoft Official Courseware (MOC) will be changing over the course of 2014, with updates for Windows Server 2012 R2, Windows 8.1 and SQL Server 2014.

Playing catch up with Cloud
The implementation of cloud technology is spreading like wildfire; demand for qualified IT professionals has now outpaced supply, opening up a skills gap. A joint report from IDC and Microsoft predicts that by 2015 there will be 1.4 million additional cloud-related roles.

You can prepare yourself for the current demand by attaining the MCSE: Private Cloud Certification. By proving you can build and maintain a private cloud utilising Windows Server 2012 & System Center 2012, you will be eligible for a variety of roles including that of a Server Administrator, Systems Programmer or Network Manager.

Be prepared with the Microsoft Virtual Academy
Whether you’re starting out on the MTA, focussing on the MCSE or simply looking to brush up on the latest technology, the Microsoft Virtual Academy (MVA) is a fantastic resource. Gain free access to thousands of hours of exclusive learning material on all the key technologies, straight from the experts.

Better still, the beginning of March sees the launch of the MVA ‘Hero’ campaign, where you can earn a range of fantastic prizes whilst preparing yourself for Microsoft certifications to boost your career.

So what are you waiting for?

Start working towards that certification in 2014 and take your career to the next level. For those looking to fast-track their route, Firebrand Training offers the full range of Microsoft certifications in an accelerated format.

Become an MTA in 4-6 days, or take the combined MCSA & MCSE and be certified in SQL Server 2012 or Windows Server 2012 in just 15 days.

Get learning…

Author
As part of Firebrand's global marketing team, Edward Jones actively works to serve the IT community with news, reviews and technical how-to guides. Having worked in the industry for almost three years, Edward has experience with a wide variety of Microsoft technologies, including SharePoint, Windows Server and Exchange Server. Edward is an active member of the IT community, contributing to a variety of tech publications including Entrepreneur, Channel Pro and PC Advisor.

 

A very thorough and informative article from Ed.  If you're just starting out and fancy dipping your toes in the water, the TechNet team are running upcoming IT Career Evenings in London: the perfect opportunity to network and get a better understanding.


Database Availability Groups – Storage Swing Migrations


In some circumstances it becomes necessary to migrate users quickly between servers within Database Availability Groups (DAGs), and in some instances a mailbox move is not an option.  When the underlying storage can be retained, the swing method can be used to complete the migration.

 

Some instances where these instructions have been implemented include:

 

  • Replacement of off-lease hardware (for example, nodes) where additional storage does not exist for mailbox moves or for replicating database copies.
  • Upgrading nodes to a new operating system by migrating storage and databases from a previous version of Windows and Exchange to a new version of Windows with the same Exchange version.

 

The swing method involves identifying a database copy to be moved between nodes, migrating the storage between nodes, and then migrating additional copies of the database between nodes. 

 

There are several considerations when deciding to implement the storage swing migration.  Some of these considerations include:

 

  • The complexity of the steps and the ability to test prior to implementing against production users.
  • The loss of all lagged database copies.
  • The need for end user downtime to complete the transition.
  • Maintaining enough storage to hold all log files during the transition process due to log file truncation being blocked either by circular logging or backups.
  • Content indexes must be completely rebuilt once databases and storage are migrated.

 

In this article we have implemented the following architecture for testing and documentation:

 

[Architecture diagram]

 

Step 0:  Ensure all support teams are aware of the actions to be performed.

 

It is important that all support teams are prepared for the actions to be taken in these steps.  Ensuring that storage can be migrated quickly is paramount to reducing downtime.  Also, depending on your hardware, additional steps may need to be performed to ensure that storage can be imported.  For example, in DAS environments where storage chassis are moved between nodes, RAID configurations must be imported into new controllers (as opposed to SAN environments, where LUNs can simply be mapped to different servers).

 

Step 1:  Ensure storage has appropriate labels.

When creating partitions, you can assign labels. Maintaining meaningful labels on the storage helps you to be aware of the volumes you are working with. You cannot rely on the disk numbers because as storage is migrated between servers, different disk numbers may be assigned. But when disks are moved between servers, disk labels are maintained.

 

In this example, I have generic volume names on my server – for example, Data0, Data1, Data2, and so on.

 


 

These volume names are generic and do not help identify the data contained within these volumes.  Using the Disk Management tool, I can change the volume names to something more meaningful, such as an indication of the data being stored.

 


 

Naming volumes in this manner can reduce potential confusion when migrating storage between servers.
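If you prefer to script this instead of using the Disk Management GUI, volume labels can also be set from PowerShell on Windows Server 2012 and later. This is a sketch only; the drive letters and label names below are illustrative, not values from this environment:

```powershell
# List current volumes and their labels to find the generic names (Data0, Data1, ...)
Get-Volume | Format-Table DriveLetter, FileSystemLabel, SizeRemaining, Size

# Rename a volume so the label describes the database it holds
# (example values - substitute your own drive letters and database names)
Set-Volume -DriveLetter E -NewFileSystemLabel "DB0-EDB"
Set-Volume -DriveLetter F -NewFileSystemLabel "DB1-EDB"
```

Because labels travel with the volume, these names will still be visible after the disks are imported on the target server.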

 

Step 2:  Create new database objects on the new database availability groups for the databases that will be migrated.

 

In this step, you create new database objects on the target DAG. I recommend that these new databases be created on the same DAG member.  This node will host the initial storage migration and is where you should be able to quickly restore service.  In larger DAGs, where no single node will host all database copies, you can spread the new databases across the DAG members.  Be sure to keep track of these database locations so that storage is migrated to the appropriate servers.

 

In the example below, I create the new mailbox databases, but I don’t specify a path for log files or databases.  Thus, these databases are created at the default location (%ProgramFiles%\Microsoft\Exchange Server\v14\Mailbox). The databases are not mounted at this point, and therefore no storage is being used. I am creating the databases and then allowing sufficient time for Active Directory replication and for the Microsoft Exchange Replication and Information Store services to detect the databases.

 

[PS] C:\>New-MailboxDatabase -Name NEW-DB0 -Server MBX-2A

Name                           Server          Recovery        ReplicationType
----                           ------          --------        ---------------
NEW-DB0                        MBX-2A          False           None

[PS] C:\>New-MailboxDatabase -Name NEW-DB1 -Server MBX-2A

Name                           Server          Recovery        ReplicationType
----                           ------          --------        ---------------
NEW-DB1                        MBX-2A          False           None

I can validate the configuration with the following command:

 

[PS] C:\>Get-MailboxDatabase -Server MBX-2A -Status | fl name,*mounted*,*path*

Name                    : NEW-DB0
MountedOnServer         : MBX-2A.exchange.msft
Mounted                 : False
EdbFilePath             : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB0\NEW-DB0.edb
LogFolderPath           : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB0
TemporaryDataFolderPath :

Name                    : NEW-DB1
MountedOnServer         : MBX-2A.exchange.msft
Mounted                 : False
EdbFilePath             : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB1\NEW-DB1.edb
LogFolderPath           : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB1
TemporaryDataFolderPath :

Step 3:  Remove all truncate and replay lags from database copies.

In this step, you disable truncation and replay lag settings from all database copies that have them configured.  It is necessary to have all databases up to date before migrating them to new storage.  By disabling replay and truncation lag settings, the administrator can decrease the amount of downtime required to move storage between nodes. Truncation and replay lag settings are dynamic, and when disabled, log file replay and log file truncation will start on lagged copies at the next Replication service update cycle.  Sufficient time should be allowed between this step and the day of migration in order to allow any lagged copies to replay outstanding log files.

 

In the following example, databases assigned to MBX-1C are databases with lagged copies:

 

[PS] C:\>Get-MailboxDatabaseCopyStatus *

Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                              Length    Length                             State
----                                          ------          --------- ----------- --------------------   ------------
DB0\MBX1-A                                    Mounted         0         0                                  Healthy
DB1\MBX1-A                                    Healthy         0         0           1/28/2014 1:59:43 PM   Healthy
DB1\MBX-1B                                    Mounted         0         0                                  Healthy
DB0\MBX-1B                                    Healthy         0         0           1/28/2014 1:36:57 PM   Healthy
DB0\MBX-1C                                    Healthy         0         267         1/28/2014 1:36:57 PM   Healthy
DB1\MBX-1C                                    Healthy         0         253         1/28/2014 1:59:43 PM   Healthy
NEW-DB0\MBX-2A                                Dismounted      0         0                                  Unknown
NEW-DB1\MBX-2A                                Dismounted      0         0                                  Unknown

 

To disable the lagged copy settings, use the Set-MailboxDatabaseCopy cmdlet:

 

Set-MailboxDatabaseCopy DB0\MBX-1C -ReplayLagTime 0.0:0:0 -TruncationLagTime 0.0:0:0

Set-MailboxDatabaseCopy DB1\MBX-1C -ReplayLagTime 0.0:0:0 -TruncationLagTime 0.0:0:0
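To confirm the new lag values took effect before waiting on the replay queues, you can inspect the database configuration. A quick check, assuming the database names used above:

```powershell
# Both lag values should now report 00:00:00 for the MBX-1C copies
Get-MailboxDatabase DB0 | Format-List Name, ReplayLagTimes, TruncationLagTimes
Get-MailboxDatabase DB1 | Format-List Name, ReplayLagTimes, TruncationLagTimes
```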

Eventually, the replay queues should drain to zero.

 

[PS] C:\>Get-MailboxDatabaseCopyStatus *

Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                              Length    Length                             State
----                                          ------          --------- ----------- --------------------   ------------
DB0\MBX1-A                                    Mounted         0         0                                  Healthy
DB1\MBX1-A                                    Healthy         0         0           1/28/2014 1:59:43 PM   Healthy
DB1\MBX-1B                                    Mounted         0         0                                  Healthy
DB0\MBX-1B                                    Healthy         0         0           1/28/2014 1:36:57 PM   Healthy
DB0\MBX-1C                                    Healthy         0         0           1/28/2014 1:36:57 PM   Healthy
DB1\MBX-1C                                    Healthy         0         0           1/28/2014 1:59:43 PM   Healthy
NEW-DB0\MBX-2A                                Dismounted      0         0                                  Unknown
NEW-DB1\MBX-2A                                Dismounted      0         0                                  Unknown

Depending on the duration of the original lag settings, this procedure could take several hours or days to complete. Once the replay queues are at zero, the lagged copy is considered successfully disabled.

 

Step 4:  Validate database copy health

This step needs to be performed immediately before migrating storage.  It is imperative that all database copies be healthy prior to proceeding with further steps. You can use Get-MailboxDatabaseCopyStatus to validate that all database copies are healthy.

 

[PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX1-A

Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                              Length    Length                             State
----                                          ------          --------- ----------- --------------------   ------------
DB0\MBX1-A                                    Mounted         0         0                                  Healthy
DB1\MBX1-A                                    Healthy         0         0           1/29/2014 5:32:30 AM   Healthy

[PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX-1B

Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                              Length    Length                             State
----                                          ------          --------- ----------- --------------------   ------------
DB1\MBX-1B                                    Mounted         0         0                                  Healthy
DB0\MBX-1B                                    Healthy         0         0           1/29/2014 5:32:22 AM   Healthy

[PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX-1C

Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                              Length    Length                             State
----                                          ------          --------- ----------- --------------------   ------------
DB0\MBX-1C                                    Healthy         0         0           1/29/2014 5:32:22 AM   Healthy
DB1\MBX-1C                                    Healthy         0         0           1/29/2014 5:32:30 AM   Healthy

 

If any database copy is unhealthy, it needs to be fixed.  In some instances that may require reseeding, which can take several hours. Allot appropriate time to ensure that any remediation steps can be completed.
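If a copy cannot be recovered in place, it can be reseeded with Update-MailboxDatabaseCopy. A hedged example; the copy name below is illustrative:

```powershell
# Suspend the unhealthy copy before reseeding (required by Update-MailboxDatabaseCopy)
Suspend-MailboxDatabaseCopy DB0\MBX-1C -Confirm:$false

# Reseed the copy, discarding the existing database and log files on the target;
# replication resumes automatically after the seed unless -ManualResume is specified
Update-MailboxDatabaseCopy DB0\MBX-1C -DeleteExistingFiles
```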

 

Step 5:  Dismount databases and validate database dismount

Next, the databases that will be migrated are dismounted. They will need to remain dismounted until the storage has been successfully migrated, and services are restored in the new database availability group.

 

Use Dismount-Database to dismount the databases:

 

[PS] C:\>Dismount-Database DB0 -Confirm:$False
[PS] C:\>Dismount-Database DB1 -Confirm:$False

 

Use Get-MailboxDatabase –Status to verify the databases are dismounted.

 

[PS] C:\>Get-MailboxDatabase DB0 -Status | fl *mounted*

MountedOnServer : MBX1-A.exchange.msft
Mounted         : False

[PS] C:\>Get-MailboxDatabase DB1 -Status | fl *mounted*

MountedOnServer : MBX-1B.exchange.msft
Mounted         : False

 

Step 6:  Ensure log file copy and replay has completed.

 

Conditions can arise between validating copy status and dismounting the databases that leave some log files uncopied to the passive node. So in this step, you manually copy all log files to the passive node.

 

Using an administrative command prompt on the server hosting the passive copy, navigate to the log file directory for a database you are migrating.

 

Execute a command similar to the following: robocopy \\<ActiveNode>\<Drive$>\<log folder path> . /E /XF *.chk  (This example assumes the command prompt is already in the target log file directory; the /XF *.chk switch excludes checkpoint files, which must not be copied between copies.)

 

F:\DB1>robocopy \\mbx-1b\f$\DB1 . /e /xf *.chk

------------------------------------------------------------------------------
   ROBOCOPY     ::     Robust File Copy for Windows

------------------------------------------------------------------------------

  Started : Wednesday, January 29, 2014 9:50:13 AM
   Source : \\mbx-1b\f$\DB1\
     Dest : F:\DB1\

    Files : *.*

Exc Files : *.chk

  Options : *.* /S /E /DCOPY:DA /COPY:DAT /R:1000000 /W:30

------------------------------------------------------------------------------

                         364    \\mbx-1b\f$\DB1\
        *EXTRA Dir        -1    F:\DB1\incseedInspect\
100%        New File               1.0 m        E01.log
                           0    \\mbx-1b\f$\DB1\IgnoredLogs\
                           0    \\mbx-1b\f$\DB1\inspector\

------------------------------------------------------------------------------

               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :         3         0         0         0         0         1
   Files :       364         1       363         0         0         0
   Bytes :  362.00 m    1.00 m  361.00 m         0         0         0
   Times :   0:00:00   0:00:00                       0:00:00   0:00:00

   Speed :            22310127 Bytes/sec.
   Speed :            1276.595 MegaBytes/min.
   Ended : Wednesday, January 29, 2014 9:50:13 AM

 

This will ensure that any missing log files, as well as the updated ENN.log, are available on all copies for replay.

 

Step 7:  Replay all log files into databases.

Next, use eseutil to replay all log files into all database copies.  This will ensure that all copies are up to date prior to migrating storage to the remote nodes.  This step will be performed on all servers hosting a passive or active database copy.

 

Launch an administrative command prompt and navigate to the log file directory.

 

Run eseutil /r ENN, where ENN is the three-character log file prefix for that log sequence (for example, E01).

 

F:\DB1>eseutil /r e01

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 14.03
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating RECOVERY mode...
    Logfile base name: e01
            Log files: <current directory>
         System files: <current directory>

Performing soft recovery...
                      Restore Status (% complete)

          0    10   20   30   40   50   60   70   80   90  100
          |----|----|----|----|----|----|----|----|----|----|
          ...................................................

Operation completed successfully in 21.938 seconds.

 

Once this has completed on all database copies, proceed to the verification step.

 

Step 7 (continued):  Validate database headers.

 

Once all log files have been copied and replayed, you must verify that all database copies reflect this work by comparing the database headers of each copy to ensure that they are equal. Specifically, compare the Last Consistent and Last Detach attributes, which you can view using eseutil.

 

To dump the header of the database, use eseutil /mh.

 

E:\DB0>eseutil /mh DB0.edb

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 14.03
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: DB0.edb

DATABASE HEADER:
Checksum Information:
Expected Checksum: 0x1c1c8028
  Actual Checksum: 0x1c1c8028

Fields:
        File Type: Database
         Checksum: 0x1c1c8028
   Format ulMagic: 0x89abcdef
   Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,17
Engine ulVersion: 0x620,17
Created ulVersion: 0x620,17
     DB Signature: Create time:01/27/2014 12:18:49 Rand:3626382 Computer:
         cbDbPage: 32768
           dbtime: 962549 (0xeaff5)
            State: Clean Shutdown
     Log Required: 0-0 (0x0-0x0)
    Log Committed: 0-0 (0x0-0x0)
   Log Recovering: 0 (0x0)
  GenMax Creation: 00/00/1900 00:00:00
         Shadowed: Yes
       Last Objid: 7494
     Scrub Dbtime: 0 (0x0)
       Scrub Date: 00/00/1900 00:00:00
     Repair Count: 0
      Repair Date: 00/00/1900 00:00:00
Old Repair Count: 0
  Last Consistent: (0x1E6,1,2CA)  01/29/2014 10:57:52
      Last Attach: (0x16A,1,270)  01/29/2014 10:12:49
      Last Detach: (0x1E6,1,2CA)  01/29/2014 10:57:52
             Dbid: 1
    Log Signature: Create time:01/27/2014 12:18:48 Rand:3652040 Computer:
       OS Version: (6.2.9200 SP 0 NLS ffffffff.ffffffff)

Previous Full Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Incremental Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Copy Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Differential Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Current Full Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Current Shadow copy backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

     cpgUpgrade55Format: 0
    cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0

       ECC Fix Success Count: none
   Old ECC Fix Success Count: none
         ECC Fix Error Count: none
     Old ECC Fix Error Count: none
    Bad Checksum Error Count: none
Old bad Checksum Error Count: none

  Last checksum finish Date: 00/00/1900 00:00:00
Current checksum start Date: 00/00/1900 00:00:00
      Current checksum page: 0

Operation completed successfully in 0.125 seconds.

 

After dumping the header of each database copy for the same database, compare the LastConsistent and LastDetach times.  If these times are equal across all copies of the same database, then log file copy and log file replay were successful.

 

DB0\MBX-1A

Last Consistent: (0x1E6,1,2CA)  01/29/2014 10:57:52
Last Detach: (0x1E6,1,2CA)  01/29/2014 10:57:52

 

DB0\MBX-1B

Last Consistent: (0x1E6,1,2CA)  01/29/2014 10:57:53
Last Detach: (0x1E6,1,2CA)  01/29/2014 10:57:53

 

DB0\MBX-1C

Last Consistent: (0x1E6,1,2CA)  01/29/2014 10:57:52
Last Detach: (0x1E6,1,2CA)  01/29/2014 10:57:52

If the values on any copy do not match for any reason, the database should be mounted on the source server, and then start back at Step 4 of this document. If all database headers are equal, proceed with storage migration.
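Comparing these timestamps by eye across many copies is error-prone. As an illustration only (a sketch that assumes the eseutil /mh output format shown above, not a supported tool), a few lines of Python can extract and compare the two fields across all copies:

```python
import re

FIELDS = ("Last Consistent", "Last Detach")

def header_times(dump):
    """Pull the Last Consistent / Last Detach timestamps out of an eseutil /mh dump."""
    times = {}
    for field in FIELDS:
        # Matches e.g. "Last Consistent: (0x1E6,1,2CA)  01/29/2014 10:57:52"
        m = re.search(rf"{field}:\s*\(\S+\)\s+(\S+ \S+)", dump)
        times[field] = m.group(1) if m else None
    return times

def copies_match(dumps):
    """True when every database copy reports identical header timestamps."""
    parsed = [header_times(d) for d in dumps.values()]
    return all(p == parsed[0] for p in parsed[1:])
```

If such a comparison reports a mismatch, mount the database on the source server and repeat from Step 4, exactly as described above.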

 

Step 8:  Migrate storage to the new node.

 

When moving storage to the new server, start by migrating from a server hosting the passive database copy. In this example, I will focus on DB0, which was passive on server MBX-1B. The steps to move storage between servers depend on your storage implementation, and as such they are not covered in this article. These steps should have been tested and validated prior to this point.

 

I recommend migrating the storage from a single node first. This allows the original active database and storage to remain intact in case there are any issues with the storage migration or the database on the target server. After services have been established on the target server the additional databases and storage can be migrated.

 

In this example, the new databases were created on server MBX-2A. I am moving the storage from MBX-1B to MBX-2A. After bringing the disks online on MBX-2A using the Disk Management tool, appropriate drive letters or mount points can be assigned. IMPORTANT: note the drive letters and paths used in this procedure. You will need to repeat this step on other servers using the exact same paths; using different drive letters or paths will cause subsequent steps to fail.

 

Step 9:  Mount the migrated database on the new node.

In Step 2 above, you created your database objects. In this step, match one of those databases to the files that were moved from the original DAG. First, use Set-MailboxDatabase to set the AllowFileRestore flag on each database.

 

[PS] C:\>Set-MailboxDatabase -Identity NEW-DB1 -AllowFileRestore:$TRUE

 

Once the AllowFileRestore flag has been set, change the database and log file paths for the new database object to match the migrated storage. When setting the EDB file path, it is very important to use the correct file name. Depending on the configuration used in Step 8, the paths do not have to be, and may not be, the same as they were on the original server.

 

Use Move-DatabasePath to set the database and log file paths, as shown below.

 

[PS] C:\>Move-DatabasePath NEW-DB1 -LogFolderPath f:\DB1 -EdbFilePath g:\DB1\DB1.edb -ConfigurationOnly:$TRUE -Confirm:$FALSE

Confirm
This operation will skip the safety check and make the change to Active Directory directly. Do you want to continue?

 

Be sure to allow ample time for Active Directory replication to occur. Then, mount the database using Mount-Database.

 

[PS] C:\>Mount-Database NEW-DB1

 

If the command completes successfully, the database mount status can be verified with Get-MailboxDatabase -Status.

 

[PS] C:\>Get-MailboxDatabase -Identity NEW-DB1 -Status | fl *mounted

MountedOnServer : MBX-2A.exchange.msft
Mounted         : True

 

Although the database is mounted, mailboxes still reference the original dismounted database in the original database availability group.

 

Step 10:  Move mailboxes to reference the migrated database.

Begin the process of restoring mailbox access by moving the mailboxes from the original database to the new database.  This is accomplished using Get-Mailbox and Set-Mailbox.

 

[PS] C:\>Get-Mailbox -Database DB1 | Set-Mailbox -Database NEW-DB1

Confirm
Rehoming mailbox "exchange.msft/LoadGen Objects/Users/MBX-1B/DB1/MBX-1B 0B63EF06-LGU000001" to database "NEW-DB1". This
operation will only modify the mailbox's Active Directory configuration. Be aware that the current mailbox content
will become inaccessible to the user.
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [?] Help (default is "Y"): a

 

After allowing sufficient time for Active Directory replication, users should be able to access their mailboxes. Transport services may need to be restarted to force re-categorization of messages so that they are delivered to the new servers.

 

Step 11:  (Optional):  Migrate storage associated with other database copies.

 

This step is optional if all storage for all database copies was migrated in Step 8. In this step, complete the migration of storage from the original servers to the new servers that will house the database copies. It is important that all paths on the new servers match, so pay careful attention to how the disks are presented and how drive letters / mount points are assigned.

 

Step 12:  Add database copies of new databases to additional DAG nodes using migrated storage.

After storage has been completely migrated, the original databases should now be available on servers in the new DAG. Using Add-MailboxDatabaseCopy, you can reinstate passive copies of the database using the databases that were migrated from the original DAG. The Replication service will match these databases to the new log file stream and begin log file replay. If truncation and/or replay lag was previously configured, the copies may be added with the lag at this time.

 

[PS] C:\>Add-MailboxDatabaseCopy NEW-DB1 -MailboxServer MBX-2B
[PS] C:\>Add-MailboxDatabaseCopy NEW-DB1 -MailboxServer MBX-2C -ReplayLagTime 7.0:0:0
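The -ReplayLagTime value above uses the time span format dd.hh:mm:ss, so 7.0:0:0 means seven days. As a small illustration (a hypothetical helper, not an Exchange tool; it assumes the day component is always present), the format can be parsed like this:

```python
from datetime import timedelta

def parse_lag(value):
    """Parse a dd.hh:mm:ss lag string such as '7.0:0:0' into a timedelta."""
    days, _, clock = value.partition(".")
    hours, minutes, seconds = (int(p) for p in clock.split(":"))
    return timedelta(days=int(days), hours=hours, minutes=minutes, seconds=seconds)
```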

 

The success of these operations can be validated with Get-MailboxDatabaseCopyStatus.

 

[PS] C:\>Get-MailboxDatabaseCopyStatus NEW-DB1\*

Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                              Length    Length                             State
----                                          ------          --------- ----------- --------------------   ------------
NEW-DB1\MBX-2A                                Mounted         0         0                                  Healthy
NEW-DB1\MBX-2B                                Healthy         0         0           1/29/2014 8:59:19 PM   Crawling
NEW-DB1\MBX-2C                                Healthy         0         162         1/29/2014 8:59:19 PM   Crawling

 

Once this procedure has been successfully completed on all database copies, the original servers can be decommissioned, if necessary.

Microsoft researcher busts myths in new book, ‘It’s Complicated: The Social Lives of Networked Teens’


 In the book, “It’s Complicated: The Social Lives of Networked Teens,” released Feb. 25, Microsoft researcher and ethnographer danah boyd explores major myths about teens' use of social media. She finds they engage with each other and develop a sense of identity – despite parental obstacles and other challenges.

She takes on ideas and expressions about identity, privacy, safety, danger and bullying. She also finds that adults’ fear can hinder teenagers’ ability to become informed, thoughtful and engaged citizens through online interaction.

A press tour begins Tuesday in Cambridge, Mass., and also includes the nation’s capital, Seattle, Austin and San Francisco.

In a Q&A that kicks off that tour, boyd talks about how her book covers a decade of work, including an entirely new wave of fieldwork that is for an adult audience but on behalf of teens. She also reveals what surprised her in researching the book and why it isn’t a how-to guide.

Check out Microsoft Research to find the full Q&A and what's next on the horizon for boyd.


Athima Chansanchai
Microsoft News Center Staff

Windows Azure – Announcements!

By now it is clear that Windows Azure is a platform in constant evolution, and last week, too, new features were announced. Below I would like to summarize all the news of interest to IT Pros: 200 co-admins: as you well know, it is possible to add multiple administrators to the same subscription. This way several people have rights over the same subscription, allowing coordinated and centralized work...(read more)

Microsoft and Nokia

Frank X. Shaw, Corporate Vice President of Communications at Microsoft, comments on the recent announcements at Mobile World Congress (MWC) about the growth of Windows Phone and how this ties in with Nokia's news about the launch of the Nokia X smartphone. Find Frank X. Shaw's article on Microsoft and Nokia here....(read more)

Xbox One is winner of ‘Product of the Year’ award in home entertainment category



Xbox One has been chosen as the winner of the 2014 “Product of the Year” award in the home entertainment category in the world’s largest consumer survey for product innovation.

Albert Penello, Microsoft director of product planning, spoke with Parade magazine about the win and how launching a new iteration of Xbox meant rethinking the device entirely to create a multipurpose entertainment hub for the whole family.

More than 45,000 people were surveyed; you can see the full list of winners in this AdWeek story.

And to read more about the award, head over to Xbox Wire.

You might also be interested in:

· Wonder what the innovators do in their off time? They tinker. In The Garage.
· Coming March 11: Xbox One ‘Titanfall’ special edition bundle
· HP Pavilion x360 convertible PC is perfect for just about any scenario

Suzanne Choney
Microsoft News Center Staff

Update available for Brazil, Fiji and Jordan time zones


This time zone update for Windows Operating Systems has been published on Microsoft Download Center. It contains the following changes:

  • Jordan:
    Jordan announced last year that, on December 20, 2013, it would no longer use the daylight saving time (DST) schedule that it observed in 2012 and 2013. Beginning in 2014, the DST period in Jordan will begin at 00:00:00.000 on the Friday that follows the last Thursday of March, and it will end at 01:00:00.000 on the last Friday of October.

  • Brazil:
    Brazil made a time zone change effective November 10, 2013 that restores the state of Acre (capital, Rio Branco) to S.A. Pacific Standard Time (UTC-05:00). More specifically, this update sets the display name of the S.A. Pacific Standard time zone to "(UTC-05:00) Bogota, Lima, Quito, Rio Branco".

  • Fiji:
    Fiji announced that the country's DST schedule for this year ends at 02:00:00.000 on January 19.

For more information, or to download this update, see http://support.microsoft.com/kb/2922717.
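For illustration, the new Jordan DST start rule ("the Friday that follows the last Thursday of March") can be computed with a few lines of Python. This is just a sketch of the rule, not part of the update itself:

```python
from datetime import date, timedelta

def jordan_dst_start(year):
    # Walk back from March 31 to find the last Thursday of March...
    d = date(year, 3, 31)
    while d.weekday() != 3:          # Monday=0 ... Thursday=3
        d -= timedelta(days=1)
    # ...then DST begins on the Friday that follows it.
    return d + timedelta(days=1)

print(jordan_dst_start(2014))        # 2014-03-28, the first year the new rule applies
```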

Broadcast your awesome game action with Twitch, coming to Xbox One March 11


Twitch, the world’s largest video platform and community for gamers, is coming to Xbox One March 11. You’ll be able to use the Twitch app to broadcast your favorite games live simply by saying “Xbox, Broadcast.”

When you’re done broadcasting, you can use Twitch to watch games you love, chat with players you follow or even join a broadcaster’s game.

“Only Xbox One offers the most complete Twitch experience, with access to any live broadcast and the full Twitch catalog,” said Yusuf Mehdi, Xbox corporate vice president of marketing. “We’ve designed Xbox One to be the best place to play with features and unparalleled partnerships that deliver epic moments for gamers.”

To learn more, head over to Xbox Wire.

You might also be interested in:

· Wonder what the innovators do in their off time? They tinker. In The Garage.
· Coming March 11: Xbox One “Titanfall” special edition bundle
· HP Pavilion x360 convertible PC is perfect for just about any scenario

Suzanne Choney
Microsoft News Center Staff


Data loss prevention in Exchange just got better


With Exchange 2013, we released a new data loss prevention (DLP) capability based on deep content analysis that helps you identify, monitor, and protect sensitive information. We’re continually looking to expand our DLP capabilities, and today we’re bringing two new ones to you—Document Fingerprinting and Policy Tips in Outlook Web App (OWA). Both are being rolled out for Office 365 users right now, and they’ll be part of the Exchange Server 2013 SP1 release for our on-premises users (please stay tuned for more information on SP1).

Watch this short video that explains what DLP has to offer today and how the new capabilities can help your organization be more compliant.

 

Let’s look at these two new capabilities in more detail.

Document Fingerprinting

Document Fingerprinting enables you to match documents that are derived from the same template. This can be useful for organizations that frequently use standard forms or templates, for example:

  • A hospital that has an insurance form that patients fill in, each with a different insurance provider.
  • A tax processing office that uses several standard tax forms that it applies to a wide range of situations.
  • A law firm that uses a standard template to draft patent applications that it files on behalf of its clients.

To understand how this works, let’s take a look at a scenario.

Contoso Pharma is a pharmaceutical company with a research division. Employees in the research division collaborate with their peers across the company to create new products and services, and file patents to protect their intellectual property. The law firm used by the company for patent filing uses a standard template for patent applications.

Say you’re an administrator at Contoso Pharma. You can use Document Fingerprinting to define a customized sensitive information type called “Patents.” To do so, you use the new administrative interface in the Exchange Admin Center (EAC) to create a new document fingerprint. Select the file you want to fingerprint, and then select the standard template that employees use for that file, in this case, patent applications.

You create document fingerprints in the EAC by selecting the file and then the sensitive information type.

This creates a template of that kind of document, which is used to detect other documents that are derived from the template.

When you fingerprint a document, a template of that kind of document is created (1) that is then used to detect documents (2) created with it.

Continuing with our Contoso Pharma scenario, once the patent template is fingerprinted, you (as an administrator) can use the existing Exchange Transport Rules and DLP infrastructure to create a rule to detect email with sensitive information of type “Patents” and define any of the supported actions in DLP. For example, you could block emails with patent documents attached from being sent externally. If Contoso Pharma uses an outside counsel for filing patents, you can allow users to send the email with an override option from Policy Tips! (See the introduction to Policy Tips in OWA section below.)

Although the scenario above refers to patents, you can easily imagine document fingerprinting being used to detect sensitive information in many other circumstances, like a hospital fingerprinting custom forms that contain personal health information, or a tax processing agency fingerprinting the 1040 EZ or W2 forms in the U.S.A.
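To build intuition for how this kind of template matching can work, here is a deliberately simplified Python sketch based on hashed word shingles. It is purely illustrative: this is not the algorithm Exchange uses, and the 0.6 threshold is an arbitrary assumption.

```python
import hashlib

def fingerprint(text, k=5):
    """Set of hashes of k-word shingles; a toy stand-in for a document fingerprint."""
    words = text.lower().split()
    return {hashlib.sha1(" ".join(words[i:i + k]).encode()).hexdigest()
            for i in range(max(len(words) - k + 1, 0))}

def matches_template(template_fp, document, threshold=0.6):
    """True when enough of the template's shingles survive in the document."""
    if not template_fp:
        return False
    doc_fp = fingerprint(document)
    return len(template_fp & doc_fp) / len(template_fp) >= threshold
```

The idea the sketch captures: a completed form keeps most of the template's boilerplate verbatim, so its shingle overlap stays high, while an unrelated document shares essentially none.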

Document fingerprinting is just one method organizations can use to detect sensitive content. Here’s a quick guide to the different methods of detecting sensitive content and the uses of each:

  • Detection of standardized entities supported by Microsoft out of the box. If you want to detect standard entities like credit cards, debit cards, and so on, check if this kind of detection is supported in the out-of-the-box list provided by Microsoft and, if it is, use it. You can customize this method of detection; learn how in this TechNet topic.
  • Developing Sensitive Information rule packages. If you need to detect entities that are specific to your organization (for example, an insurance number or an employee number), you can develop custom rules based on a combination of regular expressions and keywords. For details, see this TechNet topic.
  • Document Fingerprinting. If you have an established business practice in which you use standard forms or templates (for example, patent filings, health insurance forms, tax forms, and so on), you can use document fingerprinting to detect the completed forms. For details, see the Document Fingerprinting topic on TechNet.

Policy Tips in OWA

Policy tips are designed to notify users in your organization when they are sending sensitive information over email. Policy Tips are similar to MailTips, and you can use them in Outlook in several different ways to help your users avoid sending sensitive information in email. For example, you can use Policy Tips to:

  • Inform your users of the presence of sensitive information and block the email from being sent.
  • Educate your users through a Notify Policy Tip when sensitive content is present in their emails.
  • Empower your users to make case-by-case decisions by allowing them to override the sensitive information policy—with the option of including a business justification for the override.

With this new release in the Office 365 service and Exchange Server 2013 SP1, all the rich capabilities of Policy Tips previously available in Outlook 2013 will now also be available across the OWA interfaces.

As an administrator, you have the flexibility to set these different ways of using policy tips based on your business requirements. For example, you could set up a policy to only send a notification if an email contains one or two social security numbers, but block the mail from being sent and require a business justification if an email has a spreadsheet attached that contains more than 50 social security numbers.
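The threshold logic in that example can be sketched as follows. This is a toy Python illustration with an invented SSN pattern and cutoffs; real policies are authored as transport rules in Exchange, not as code:

```python
import re

# A simple SSN-like pattern, for illustration only.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def policy_action(content, notify_at=1, block_at=50):
    """Pick an action based on how many SSN-like values appear in the content."""
    count = len(SSN.findall(content))
    if count > block_at:
        return "block-require-justification"
    if count >= notify_at:
        return "notify"
    return "allow"
```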

DLP Policy Tips in OWA and OWA for Devices

As of Exchange 2013 SP1, OWA and OWA for Devices will both have Policy Tips support. The experience is in line with the experience in Outlook 2013, where users see a Policy Tip based on the DLP rules set by their administrator. If you already have a DLP policy with Policy Tips turned on, they will automatically apply in OWA. That means administrators do not need to create new configurations or turn switches on for this feature to work in OWA.

When you include sensitive information in an email message, a DLP Policy Tip notifies you before you send the message.

Policy Tips in OWA also show the user what sensitive content was detected in an email; in Outlook, it also allows the user to report back to the administrator if they think the email content is not sensitive. This empowers users that are closest to the data to provide feedback on the efficacy of detecting sensitive content.

A callout displays the sensitive content that was detected in the email and allows the user to report a mistake in the detection.

Administrators can also block emails that contain a large amount of sensitive content (for example, an Excel attachment with more than 50 credit card numbers). Just as in Outlook 2013, the attachment with the sensitive content is highlighted so the user can easily identify it. Depending on the organization’s policy, users can be empowered to override the policy—with the option of requiring them to include a business justification—and send the email.

Policy Tips can be set up to block users from sending a large amount of sensitive content; attachments with sensitive content are highlighted for the user.

These experiences are also replicated on all mobile devices, from Windows Phone to iOS and Android devices.

Policy Tips can be set up on mobile devices to educate users.

Different kinds of Policy Tips can be triggered based on conditions, like the amount of sensitive data in an email. No new configuration is required to do this, if DLP in Outlook has already been enabled.

Shared configuration

Policy Tips in Outlook 2013 are turned on when an administrator creates a rule with an action to notify the end user with a policy tip, or configures a DLP policy with such a rule. In Exchange 2013 these rules are applied on both the server and the client (that is, Outlook 2013). These same rules now get applied in exactly the same manner in OWA as well. This means that Exchange, Outlook, and OWA all share these configurations:

  • Exchange Transport Rules. When you configure a transport rule to look for sensitive content in an email, and take Policy Tip-based actions on them, they are uniformly applied in Outlook and OWA. This includes predicates like detecting sender group membership, and other properties of recipients as well. Specifically, the NotifySender action in Exchange Transport Rules triggers Policy Tips for both Outlook and OWA.

Different kinds of Policy Tips options can be triggered based on conditions, such as the one above where we notify the user about sensitive information but allow them to send the message.

  • Definitions of sensitive content. When you configure a transport rule to look for any of the out-of-the-box sensitive content or any custom sensitive types, they are evaluated in exactly the same manner in all three places, Exchange, Outlook, and OWA. Specifically, the MessageContainsDataClassifications predicate in Exchange Transport Rules defines the sensitive content for both Outlook and OWA.

You can define sensitive content for Outlook and OWA when you configure a transport rule.

  • Policy Tip configurations. When you edit the Policy Tip configurations to customize the text displayed in a Policy Tip, or the Compliance URL, they are updated in Outlook 2013, OWA, and OWA for Devices.

Custom Policy Tip configurations, like compliance URLs or customized notification texts, are applied uniformly in Outlook, OWA, and OWA for Devices.

DLP planning and implementation are crucial for most organizations, because they impact the organization as a whole and all the users who work with sensitive data. Protecting sensitive information without decreasing users’ productivity is a key principle of DLP in Office 365, and Policy Tips and Document Fingerprinting can help you do just that. We hope adding these capabilities to our DLP arsenal will help make DLP management easier for your users. Stay tuned for more news about our work in compliance.

Shobhit Sahay

Announcing the Enhanced Mitigation Experience Toolkit (EMET) 5.0 Technical Preview


I’m here at the Moscone Center, San Francisco, California, attending the annual RSA Conference USA 2014. There’s a great crowd here and many valuable discussions. Our Microsoft Security Response Center (MSRC) engineering teams have been working hard on the next version of EMET, which helps customers increase the effort attackers must make to compromise a computer system.

I’m happy to announce the public release of the EMET 5.0 Technical Preview today from the RSA exhibit hall.

During last night’s RSA reception, conference attendees got a sneak preview of EMET 5.0 as demonstrated by Jonathan Ness, Chengyun Chu, Elia Florio and Elias Bachaalany from our EMET engineering team. If you missed it, we’ll have our EMET engineering team here all week at RSA demonstrating the current version of EMET 4.1, as well as the EMET 5.0 Technical Preview, at the Microsoft Booth (number 3005).

EMET anticipates the most common actions and techniques adversaries might use in compromising a computer, and can help protect the computer by diverting, terminating, blocking and invalidating those actions and techniques. In recent 0-days, EMET has been an effective mitigation against memory corruption. Having EMET installed and configured on computers meant that the computers were protected from those attacks.

EMET 5.0 Technical Preview adds new protections for enterprises on top of the 12 built-in security mitigations included in version 4.1. For instance, the new Attack Surface Reduction mitigation allows enterprises to better protect third-party and custom-built applications by selectively enabling Java, Adobe Flash Player and Microsoft or third-party plug-ins. At the Security Research and Defense blog, our engineering team provides a deep dive blog post on EMET 5.0 Technical Preview.

Since the first release of EMET in 2009, our customers and the security community have adopted EMET and provided us with valuable feedback. Your feedback both in forums and through Microsoft Premier Support Services, which provides enterprise support for EMET, has helped shape the new EMET capabilities to further expand the range of scenarios it addresses.

The same goes for EMET 5.0 Technical Preview. As we march towards the final release of EMET 5.0, we would like to invite you to download the EMET 5.0 Technical Preview at microsoft.com/emet to deploy in your test environments. Your feedback is valuable in shaping our roadmap. Please let us know what you think.

Finally, if you’re at the RSA Conference, please stop by our booth and share your feedback with Jonathan, Chengyun, Elia and Elias. We’d like to hear from you!

Thanks,
Chris Betz
Senior Director
Microsoft Security Response Center (MSRC)

Conundrums in cyberspace — exploiting security in the name of, well, security


 Posted by Scott Charney
Corporate Vice President, Trustworthy Computing, Microsoft

At Microsoft, establishing and sustaining trust with our customers is essential. If our customers can’t rely on us to protect their data—whether from crooks, mismanagement or excessive government intrusion—they will look elsewhere for a technology provider.

Government access to data is a hot topic. But it’s not new. In fact, our General Counsel, Brad Smith, has addressed the issue in a series of blog posts covering, among other topics, our efforts to protect customers and our support for reforming government surveillance.

On Tuesday at the RSA Security Conference in San Francisco, I gave a speech on the changing cybersecurity landscape and the respective roles of governments, users and the IT industry. I’d like to share some of my thoughts here.


...(read more)

Announcing EMET 5.0 Technical Preview


Today, we are thrilled to announce a preview release of the next version of the Enhanced Mitigation Experience Toolkit, better known as EMET. You can download EMET 5.0 Technical Preview here. This Technical Preview introduces new features and enhancements that we expect to be key components of the final EMET 5.0 release. We are releasing this technical preview to gather customer feedback about the new features and enhancements. Your feedback will affect the final EMET 5.0 technical implementation. We encourage you to download this Technical Preview, try it out in a test environment, and let us know how you would like these features and enhancements to show up in the final version. If you are in San Francisco, California, for the RSA Conference USA 2014, please join us at the Microsoft booth (number 3005) for a demo of EMET 5.0 Technical Preview and give us feedback directly in person.  Several members of the EMET team will be demonstrating at the Microsoft booth for the entire Conference.

As mentioned, this Technical Preview release implements new features to disrupt and block the attacks that we have detected and analyzed over the past several months. The techniques used in these attacks have inspired us with new mitigation ideas to disrupt exploitation and raise the cost to write reliable exploits. The EMET 5.0 Technical Preview also implements additional defensive mechanisms to reduce exposure from attacks.

The two new features introduced in EMET 5.0 Technical Preview are the Attack Surface Reduction (ASR) and the Export Address Table Filtering Plus (EAF+). Similar to what we have done with EMET 3.5 Technical Preview, where we introduced a new set of mitigations to counter Return Oriented Programming (ROP), we are introducing these two new mitigations and ask for your feedback on how they can be improved. Of course, they are a “work in progress.” Our goal is to have them polished for the final version of EMET 5.0.

Let’s see in detail what these two new mitigations do, and the reasoning that led us to their implementation.

Attack Surface Reduction

In mid-2013, we published a Fix it solution to disable the Oracle Java plug-in in Internet Explorer. We received a lot of positive feedback and a number of suggestions on how we could improve the Fix it. The most recurring suggestion we received was to allow the Oracle Java plug-in on intranet websites, which commonly run Line-of-Business applications written in Java, while blocking it on Internet Zone websites. In addition to that Java-related customer feedback, we have also seen a number of exploits targeting the Adobe Flash Player plug-in. For example, the RSA breach was enabled by an Adobe Flash Player exploit embedded inside a Microsoft Excel file, and a number of targeted attacks have been carried out by Adobe Flash Player exploits embedded in Microsoft Word documents, as described by Citizen Lab. We decided to design a new feature that can be used to mitigate similar situations and to help reduce the attack surface of applications. We call this feature Attack Surface Reduction (ASR), and it can be used as a mechanism to block the usage of specific modules or plug-ins within an application. For example, you can configure EMET to prevent Microsoft Word from loading the Adobe Flash Player plug-in, or, with the support of security zones, you can use EMET to prevent Internet Explorer from loading the Java plug-in on an Internet Zone website while continuing to allow Java on Intranet Zone websites.

The example below shows ASR in action, preventing Microsoft Word from launching an Adobe Flash Player file embedded in the document. By default, EMET 5.0 Technical Preview comes pre-configured to block certain plug-ins from being loaded by Internet Explorer, Microsoft Word and Microsoft Excel. The feature is fully configurable by changing two registry keys that list the names of the plug-ins to block, and, if supported, the security zones that allow exceptions. For more details on how to configure ASR please refer to the EMET 5.0 Technical Preview user guide.
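Conceptually, the ASR check reduces to a lookup over (process, module pattern, zone). The Python sketch below illustrates that decision shape only; the rule table and module patterns are invented for illustration and are not EMET's actual registry format:

```python
import fnmatch

# Invented example rules: per process, module patterns to block,
# mapped to the zones that are exempt from the block.
ASR_RULES = {
    "iexplore.exe": {"npjpi*.dll": {"Intranet"}},   # Java plug-in: intranet only
    "winword.exe":  {"flash*.ocx": set()},          # Flash in Word: always blocked
}

def should_block(process, module, zone):
    """True if loading `module` into `process` should be blocked for this zone."""
    for pattern, allowed_zones in ASR_RULES.get(process.lower(), {}).items():
        if fnmatch.fnmatch(module.lower(), pattern):
            return zone not in allowed_zones
    return False
```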

EAF+

We also added new capabilities to the existing Export Address Table Filtering (EAF). EAF+ consolidates protection of lower-level modules and prevents certain exploitation techniques used to build dynamic ROP gadgets in memory from export tables. EAF+ can be enabled through the “Mitigation Settings” ribbon. When EAF+ is enabled, it will add the following additional safeguards over-and-above the existing EAF checks:

  • Add protection for KERNELBASE exports in addition to the existing NTDLL.DLL and KERNEL32.DLL

  • Perform additional integrity checks on stack registers and stack limits when export tables are read from certain lower-level modules

  • Prevent memory read operations on protected export tables when they originate from suspicious modules that may reveal memory corruption bugs used as “read primitives” for memory probing

For example, the third protection mechanism in the list above mitigates the exploitation technique developed in Adobe Flash Player used in some recent Internet Explorer exploits (CVE-2013-3163 and CVE-2014-0322), where the attacker attempted to build ROP gadgets by scanning the memory and parsing DLL exports using ActionScript code. Exploits for these vulnerabilities are already blocked by other EMET mitigations. EAF+ provides another way to disrupt and defeat advanced attacks. The screenshot below shows the exploit for CVE-2014-0322 in action on Internet Explorer protected by EMET 5.0 Technical Preview with only EAF+ enabled.

Other improvements

This Technical Preview enables the “Deep Hooks” mitigation setting. We have been working with third-party software vendors whose products previously did not run properly with Deep Hooks enabled, and we believe those application compatibility issues have now been resolved. We are enabling Deep Hooks in the Technical Preview to evaluate turning it on by default in the final EMET 5.0 release, because it has proven effective against certain advanced exploits that use ROP gadgets to call lower-level APIs directly. We have also introduced additional hardening to protect EMET’s configuration when loaded in memory, and fixed several application compatibility issues, including a common one involving Adobe Reader and the “MemProt” mitigation.
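Why hooking depth matters can be shown with a toy model: a shallow hook intercepts only the documented top-level API, so a ROP chain that jumps straight at the lower-level routine bypasses the check; a deep hook instruments the lower-level routine as well. The API names below are real Windows examples, but the hooking model itself is a deliberately simplified, hypothetical sketch, not EMET's actual mechanism.

```python
# Toy model of shallow vs. deep API hooking (simplified for illustration).

calls_checked = []  # records where a mitigation check fired

def mitigation_check(api_name):
    calls_checked.append(api_name)  # where EMET would validate the caller

def nt_protect_virtual_memory(deep_hooks):
    if deep_hooks:
        mitigation_check("NtProtectVirtualMemory")  # lower-level hook
    # ... the actual system call would happen here ...

def virtual_protect(deep_hooks):
    mitigation_check("VirtualProtect")              # top-level hook
    nt_protect_virtual_memory(deep_hooks)

# Normal code path: the top-level hook checks the call either way.
virtual_protect(deep_hooks=False)

# A ROP chain calling the lower-level API directly:
nt_protect_virtual_memory(deep_hooks=False)  # no check fires -> bypass
nt_protect_virtual_memory(deep_hooks=True)   # Deep Hooks still catches it

print(calls_checked)  # ['VirtualProtect', 'NtProtectVirtualMemory']
```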

Acknowledgments

We’d like to thank Spencer J. McIntyre from SecureState, Jared DeMott from Bromium Labs, along with Peleus Uhley and Ashutosh Mehra from the Adobe Security team for their collaboration on the EMET 5.0 Technical Preview.

We are excited about this Technical Preview and hope the additions are as valuable to our customers as they are to us. We invite you to download EMET 5.0 Technical Preview, give it a try, and drop us a line: we look forward to your feedback and suggestions on how to enhance the new features we have introduced, as well as ideas for additional features you would like to see in the final version of EMET 5.0. We greatly value the feedback we receive, and we want to build a product that not only provides additional protection to systems but is also easy to use and configure.

  • The EMET Team

Exchange Server 2013 SP1


Exchange Server 2013 SP1 is now available in the Microsoft Download Center.

The most important new features in the package:

An official announcement will follow later.

As before, the update is performed with the setup /m:upgrade command. Like every CU, SP1 naturally extends the schema as well.
