Resources for IT Professionals



    This post is a translation of Cloud innovation for the year ahead: From infrastructure to innovation, published on January 27. The question of when businesses would reach the tipping point for cloud migration has been debated for years, and this year that moment finally seems to have arrived. Cloud adoption is growing rapidly around the world, and regional surveys back this up. For example, according to research by the Cloud Industry Forum and others, cloud adoption in the UK alone has risen sharply from 48% five years ago to 84% today. At the same time, it is clear that many companies still face challenges. In a survey conducted by North Bridge and Wikibon (in English), of the roughly 1,000 companies that responded, 45% still named security as their biggest concern about cloud adoption, followed by regulation/compliance (36%), privacy (29%) and cloud vendor lock-in (26%)...(read more)


    Hello, this is Baba from the Windows platform support team.

    This time I would like to introduce a behavior change, starting with Windows Server 2008, in how domain controllers respond to requests over UDP 138, and a problem that can result from it.

    After migrating from Windows Server 2003 domain controllers to Windows Server 2008 or later domain controllers, environments that are configured to resolve the domain name using only NetBIOS names may run into problems such as the following, even if everything worked correctly before the migration.

    * Note that Active Directory assumes that name resolution via DNS is available. If only NetBIOS names are used, problems such as Group Policy not being applied, as well as other unexpected issues, can occur.

     

    - Logon to the domain controller fails

    - Shared folders cannot be accessed

    - Remote servers cannot be reached

                                           ... and so on.

    When this problem occurs, an error message like the following is displayed.

     

    "現在、ログオン要求できるログオンサーバーはありません"

     

    このように、認証要求先となるドメインコントローラーを見つけられないことを示すエラーメッセージがでてきます。

    ※注意: 事象により表示されるエラーメッセージは、異なることがございます。

     

     

    現在、直面している問題がこちらのシナリオに当てはまる場合、Windows Server 2008 にて行われた動作変更の影響が考えられます。

    NetBIOS でドメインログオンのための名前解決を実施した場合、名前解決の結果得られたドメインコントローラーに対して UDP 138 を利用した疎通確認を行います。

    その疎通確認に対して応答が得られたことをもって、ドメインコントローラーの検索処理が完了します。

     

    The handling of this UDP 138 reachability request differs between Windows Server 2003 and earlier and later versions, as follows.

    - Windows Server 2003 R2 and earlier

    When a request over UDP 138 is received, the domain controller simply returns a response.

    - Windows Server 2008 and later

    When a request over UDP 138 is received, the domain controller first tries to resolve the requesting computer's name via NetBIOS.

    If that resolution succeeds, it returns a response; if the name cannot be resolved, it does not respond.

    Because of this difference in behavior, the problem occurs when the domain controller cannot resolve the requesting computer's name via NetBIOS.

     

    There are two ways to resolve this problem.

    Method 1: Configure the clients so that they can resolve the domain controller's name via DNS (strongly recommended)

    Identify the DNS server that holds the domain controller's SRV records, and configure the clients to use that DNS server as their preferred DNS server.

     

     

    Method 2: Configure the domain controller so that it can resolve the clients' names via NetBIOS

    For method 2, if there are many clients you will need a WINS server so that NetBIOS name resolution works.

    However, to check whether you are hitting this problem, or as a temporary workaround, configuring LMHOSTS on the domain controller is effective.

    The LMHOSTS configuration procedure is as follows.

     

    ===========================

    How to configure LMHOSTS

    ===========================

    1. Log on to the domain controller with a user account that has administrator rights.

    2. Open [Control Panel] > [Folder Options] and, on the [View] tab, clear the "Hide extensions for known file types" check box so that file extensions are displayed.

    3. Open the %systemroot%\system32\drivers\etc folder.

    4. If an lmhosts file (with no extension) exists, open it. If there is no lmhosts file but lmhosts.sam exists, rename it by removing the extension so the file is named lmhosts, then open it in Notepad.

    5. Add an entry like the following and save the file (a concrete example is shown after this procedure).

    <client IP address> <client computer name>  #PRE

    6. Open a command prompt, run nbtstat -R, and then run nbtstat -c.

    7. Confirm that the entry you added is displayed.
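
    For example, assuming a client computer named CLIENT01 with the IP address 192.168.1.25 (both hypothetical values), the entry added in step 5 would look like this:

        192.168.1.25    CLIENT01    #PRE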

     

     

    - Reference information

    [Title] How Domain Controllers respond to LDAP Ping on UDP 138 port

    [URL] https://support.microsoft.com/en-us/kb/3088277

     

     



    This year's #CeBIT2016 once again runs under the motto d!conomy: the digital transformation that promotes efficiency and demands openness. Microsoft is also carrying forward the motto of #Cebit2015 and showing where a #digitaleswirtschaftswunder (digital economic miracle) is already taking place. Together with selected partners, concrete scenarios will be presented that let visitors experience how mobility, flexibility and productivity are already shaping our tomorrow today.

    With the "seven pictures in seven weeks" campaign, there is an exclusive behind-the-scenes look at this year's Microsoft appearance ahead of CeBIT. The number seven comes from the seven factors Microsoft has identified for making the digital economic miracle happen.

    "7 pictures in 7 weeks" picks up the number of factors and walks through the build-up of the Microsoft presence: from the logo and the stand concept through to specific partner appearances, the concept is sketched out. A new picture will be published every Friday over seven weeks. The pictures are presented in Microsoft tile style and show, step by step, what will be on offer at this year's CeBIT in hall 4 at stand C31. But I won't keep you in suspense any longer; here is the first picture:


    Information about Microsoft's appearance at CeBIT is available on the official page here. Press information can be found in the Microsoft Newsroom. And on the social web we will keep you up to date before and during CeBIT, in particular via the following channels

    and under the hashtag #Digitaleswirtschaftswunder. So join in, live on site or digitally.

    #digitaleswirtschaftswunder

     

     

     

     

    A post by Tim Lenke (@Timophilus)
    Communications Intern at Microsoft Germany

    - - -

    Author information

    As a Communications Intern at MicrosoftDE, Tim focuses on the digital society, above all in connection with CeBIT. Before joining Microsoft, he was busy with his studies in Hamburg, Istanbul and Kuala Lumpur. Under @Timophilus he shares his views on current affairs, digital topics and films.

     

     

     



    The shell of an operating system is another name for its user interface (whether graphical or textual). Microsoft Windows, being one such shell, provides common UI elements like the taskbar, window controls etc (and a longish list of dead or obsolete elements, like charms).

    Most Windows users will stick with the default shell, but alternatives do exist if you want to be individual and create more work for yourself. Desktop Linux users have a cornucopia of GUI and CLI shells to choose from, with names like Gnome, KDE, bash, tcsh, sh, tosh and bosh. Actually, the last two aren’t real. Probably.

    Hardcore power users tend to eschew the namby-pamby niceties of a gooey WIMP system and prefer keyboard shortcuts to everything, but for normal people, there are some quick ways of jumping into parts of the UI using the shell, so you can shave a few tenths off common activities and yet still relax in the modern, graphical world.

    It’s been possible for years to short-circuit sections of the Windows UI to make troubleshooting quicker – all of these would run from Start -> Run in previous versions, or just press WindowsKey+R in Windows 10. You can get to the old-fashioned Control Panel applets, for example, if you know the .cpl extension to activate them. If you don’t, you could try running %systemroot%\system32 to take you to the Windows System folder, then filter by type to show only the Control Panel items. Perhaps the most useful for troubleshooters is ncpa.cpl (which you can just run directly from WinKey+R), to take you to the Network Settings, without lots of right-clicking and faffing about.

    There are a host of other handy shortcuts, from system environment variables (you can see the full list from a cmd prompt by just typing set, and use/reference them by strapping %’s to either end), to lots of relatively obscure shell commands which jump straight to otherwise hidden or deeply-nested bits of the OS. You can just run these commands as above, or if you want to create a shortcut, set it to explorer shell:command
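
    If you would rather have a clickable version of one of these, a shortcut can also be created programmatically. Here's a minimal PowerShell sketch (assuming a writable Desktop folder; the shortcut name and target are arbitrary examples):

        # Create a desktop shortcut that opens the Downloads shell folder.
        $wsh      = New-Object -ComObject WScript.Shell
        $lnkPath  = Join-Path ([Environment]::GetFolderPath('Desktop')) 'Downloads.lnk'
        $shortcut = $wsh.CreateShortcut($lnkPath)
        $shortcut.TargetPath = 'explorer.exe'
        $shortcut.Arguments  = 'shell:downloads'
        $shortcut.Save()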

    Here are a few to try:

    • shell:accountpictures – could be useful if you’re putting your existing profile picture into a website or some such, though the pictures don’t get exposed here as PNG or JPG so YMMV.
    • shell:desktop – jumps straight to your desktop; handy if you use that as a dumping zone for docs etc
    • shell:downloads – jumps straight to your downloads folder
    • shell:onedrivecameraroll – especially useful when fishing out pics from your phone
    • shell:my pictures or shell:pictureslibrary – takes you to different places you might have photos stashed

    Most of these have been around for a while, but may be comparatively unknown. For a supposedly-fulsome list, check here.



    HowTo: Puppet on-premises to deploy resources into Azure IaaS infrastructure.

    Terms:
       Puppet - is an Open Source configuration management system that allows you to define the state of
                your IT infrastructure, then automatically enforces the correct state.
                Alternatives you may know of are Chef, or perhaps Windows PowerShell DSC.

       Tenant ID - the unique identity GUID that matches the company name that owns your Azure environment.
       Subscription ID - the GUID for the sub-area of purchased functionality.
                         This might be a department or a group or some other logical unit within the Tenant,
                          with its own billing and users.

       ARM - ("Azure Resource Manager") is the way to do things in Azure, and assign
             Resources to groups and combine resources to create things, report, etc.
             You assign permissions to resources, etc. This is what we will use.

       ASM - the original Azure "service"-based model (and matching APIs), which can be thought of as V1
             and should no longer be used. We decided NOT to use/create anything V1.
             The documentation often refers to this as "Classic", or even RDFE (Red Dog Front-End).

       IaaS - 'Infrastructure as a Service' is the platform within Azure to create a "virtual datacentre",
              with virtual networks, virtual machines, virtual NICs, virtual load-balancers, etc.
              You still run/control/manage absolutely all things in an IaaS environment.

    Actions to be done: (noted down from memory - we did this a few days ago)

    0. refer to this site for all the information on the Azure module for Puppet.
        See: https://github.com/puppetlabs/puppetlabs-azure
        Note - do not forget to run the GEM step when installing things.
               (yes, the site I was using had missed this and we got plenty of weird errors)
       To install the module:
        a) puppet module install puppetlabs-azure
        b) gem install azure
                  azure_mgmt_compute azure_mgmt_storage azure_mgmt_resources azure_mgmt_network
                  hocon retries --no-ri --no-rdoc

       This is what Puppet uses to translate the template PP's into Azure-speak and interface to Azure.

    1. To connect/configure/query  Azure, you will want to install "Azure CLI", 
        onto your Linux/Windows/OS-X machine that is hosting the Puppet server.
        See: https://azure.microsoft.com/en-gb/documentation/articles/xplat-cli-install/

    2. Using the Azure CLI (the command-line interface to Azure), you need to log in to the target subscription.
       Obviously, we need to have appropriate rights within the Subscription to create stuff.
       If all is well, you can login and confirm functionality and connectivity.
       NOTE: when you login from the CLI, it requests that you open a web page and enter the provided code.
             Once completed, you can close the browser window.
       cli: azure config mode arm
       cli: azure login

    3. to confirm you are in the right subscription, tenant, etc., issue a query to azure.
        cli: azure account show
       cut/paste the tenant and subscription GUIDs into somewhere handy - we'll need these later.

    4. using Azure CLI and a simple JSON template for Azure,
       create an ARM resource group, network, storage, VM, etc.
       see: https://github.com/Azure/azure-quickstart-templates
       If that all worked, then all is good for connectivity, permissions, etc. and we are ready to turn to Puppet.
       cli: azure group deployment create --template-file <template.json> --parameters-file <parameters.json> <resource-group> <deployment-name>
       cli: azure group log show -l <resource-group>

    5. Puppet is not going to login as you, nor get prompted for your credentials or MFA. So instead, we need
       an Azure equivalent of a Service Account. This is known as a Service Principal.

    6. You create the Service Principal in Azure Active Directory (AAD) by creating a "dummy app", which generates
       the related "Service Principal". If you do not have permissions/ownership of the Azure directory (we didn't), this
       will have to be done by someone who does. We had to get the Top Guy to log in to the Classic UI interface to do this.

        See: https://azure.microsoft.com/en-us/documentation/articles/resource-group-authenticate-service-principal/ 
        See: https://azure.microsoft.com/en-us/documentation/articles/resource-group-create-service-principal-portal/#create-application

       When the Service Principal is created, it will show the "client secret" which we need to copy for Puppet.
       Don't forget to grab it - cut/paste for later.

    7. This "Service Principal" will need to be granted "Contributor" rights to the target Subscription,
       so that it/Puppet can create things within our Subscription.  RBAC (role based access control) will be used to set it.

       NOTE: curiously, to assign this role, we need to find out/refer to the Object ID of the Service Principal,
                   not the "client id" that it reports upon creation in the UI (both are GUIDs).
                   We were able to get this property by querying AAD for all Service Principals and finding ours.

       cli:  azure ad sp list
       cli:  azure role assignment create --objId  <application's object id> --role <name of role> --scope <subscription/subscription id>

       See: https://azure.microsoft.com/en-us/documentation/articles/role-based-access-control-manage-access-azure-cli/

    8. If all worked, we should be able to login with the Service Principal to the subscription.
       cli: azure login --service-principal -u client-id -p secret --tenant tenant-guid

    9. another thing Puppet will ask for is the ASM (ie "Classic") certificate guid.pem file to authenticate into ASM.
       (even though we don't want to use ASM, Puppet still needs this)
       cli: azure config mode asm
       cli: azure account cert export
       cli: azure config mode arm

    10. Once we have all the information, we can create the Azure.conf file for Puppet, to tell it how to connect.
        the file will look like this:

        azure: {
         tenant_id: 'your-tenant-id'
         subscription_id: "your-subscription-id"
         management_certificate: "\path\to\file.pem"
         client_id: 'your-client-id'
         client_secret: 'your-client-secret'
        }

    11. We should now have all the infrastructure within Puppet ready to go.
        Create a TEST.pp file with some simple configuration to create and launch from Puppet.
        e.g. the file might simply have

           azure_vm { 'TestUbuntu':
             ensure         => present,
             location       => 'australiaeast',
             image          => 'canonical:ubuntuserver:14.04.2-LTS:latest',
             user           => 'azureuser',
             password       => 'P@ssw0rd!',
             size           => 'Standard_A0',
             resource_group => 'MyTestRG',
           }

        $ puppet apply TEST.pp --debug

         Within the Azure portal, go to the target Resource Group and you should see things appear.
         We had all sorts of uninformative errors from Puppet due to some clashes. I wasted plenty of
         time trying to understand what was wrong in Puppet or Azure or whatever. Make sure you
         are testing using a completely empty Resource Group!



     If SQL Server is something you talk to your customers about, then please ensure April 12th 2016 is marked on your and their calendars. This is when support for SQL Server 2005 ends, and conversations about upgrading to SQL Server 2016 should start (if they haven’t already!). Help your customers plan, budget and execute a smooth upgrade. Learn more about the upgrade opportunity and resources available to you. Register for the webinar on Thursday February 25th at 12 noon.

    ...(read more)


    Microsoft's digital personal assistant Cortana received a content upgrade in January 2016, including many new spoken phrases, jokes, poems and songs. Well, unfortunately Cortana cannot really sing yet, but she is trying hard. And there is a reason for it: on February 8, 2016, the "fifth season" reaches its climax with Rose Monday, which is celebrated everywhere in carnival cities between Münsterland and Lake Constance.



    From "Helau!" to the Büttenrede: Cortana knows her carnival

    In the cheerful carnival hustle and bustle, whether in Cologne or Konstanz, it is easy to lose track of things. It is good to have a personal assistant ready to help on a Lumia smartphone with Windows 10. She answers "Alaaf!" and "Helau!" in the appropriate regional style, and she is also right at the front for the classic carnival activities. She has already rehearsed the carnival fanfare ("Tä-tä! Tä-tä!"), and four different Büttenreden (well, they are a bit short, but every beginning is hard) are new additions to her repertoire.


    Inspiration from Minions and magicians

    Beyond that, the Microsoft assistant can also provide inspiration for costumes by quoting famous fictional characters or real people: "Do not try. Do it, or do not." Who said it? Jedi Master Yoda from "Star Wars", of course. Cortana can also imitate a Minion ("Bananaaaaaaaa!"). She even has an idea of which house she would end up in if she attended the Hogwarts School of Witchcraft and Wizardry from J.K. Rowling's "Harry Potter" stories; which one it is remains a secret here. And when asked what "life" is, Cortana now eloquently quotes Beatles frontman John Lennon: "Life is what happens while you're busy making other plans."


    After carnival comes CeBIT

    Cortana has also been briefed for the events following carnival. Her personal highlight in Germany is, of course, CeBIT 2016 in Hanover. Asked when it takes place, the clever computer voice naturally has an answer: "CeBIT 2016 takes place from March 14 to 18 at the exhibition grounds in Hanover."

    Cortana is available on all Windows 10 devices. More information about Cortana can be found here.

     

     

     

     

     

    A post by Irene Nadler (@irenenadler)
    Communications Manager Devices and Services at Microsoft Germany

    - - - -

    About the author


    Irene Nadler works in press and public relations at Microsoft, covering Windows, Surface and Windows Phone. She has been well acquainted with Windows since Windows 95. In her spare time, culture and football are at the top of her list.



    The German webzine TecChannel.de has published an article worth reading that presents the most dangerous malware of the past year. In it, the author Christine Schönig describes the eleven most dangerous threats of 2015. A few trends stand out:

    • Ransomware continues to advance and is more dangerous than ever. Programs such as CTB-Locker encrypt the data of the affected people and companies and only release it again after a ransom of several hundred euros has been paid. Infection happens via social engineering: the victims receive an e-mail asking them to open an attached ZIP or CAB archive and install the software it contains. The sender appears to be a company employee. CTB-Locker uses modern encryption algorithms that cannot be cracked with ordinary effort.

    Companies should warn their employees about such e-mails and train them accordingly, and private users should stay alert. Current ransomware such as CTB-Locker is not detected by many antivirus programs; only regular backups offer protection.

    • Exploit kits still pose a high risk. They exploit vulnerabilities on websites to take up residence there and push malware onto visitors. Modern variants such as the Angler exploit kit frequently change their landing page so that intrusion protection systems (IPS) do not detect them. The only remedy is regular updates of all installed software components, because where there is no vulnerability, there is no exploit.
    • The most widespread malware is probably still the Trojans. They download further malicious software that, for example, logs keystrokes, acts as ransomware or passes on confidential data. Representatives such as AAEH/Beebone disable antivirus programs by blocking their connections to the vendors' servers, or, like Sality, can spread on their own via USB media and the network.
    • The number of malicious programs for mobile devices such as smartphones keeps growing strongly. Last August, a series of vulnerabilities in remote support tools for Android became known under the name Certifi-gate, through which criminals could gain unrestricted access to the devices. Users should therefore take a close look at remote support apps before installing them.

    In addition, isolated cases became known in which malware was successfully placed in the protected software catalogues of Google and Apple. The Play Store contained an app called BrainTest, supposedly an intelligence game, which set up a rootkit on the devices through which arbitrary code could be executed. Owners of Apple iOS devices, on the other hand, were threatened by a compromised version of the XCode developer platform, which in turn infected the apps built with it.

    • Botnets also remain a major threat, even though law enforcement agencies, with support from Microsoft among others, have managed to shut down several of these networks in recent years. One of the most widespread botnets is called Simda and has already infected around 770,000 computers worldwide. The criminals' goal is to bring the affected PCs under their control. The access points used are changed constantly, so the infection is hard for antivirus software to detect.

    No matter which malware is involved, the most important protective measures are always the same: vigilance with incoming e-mail, immediate installation of patches and updates and, for companies, the definition of a multi-layered security concept.

    Guest post by Michael Kranawetter, National Security Officer (NSO) at Microsoft in Germany. In his own blog, Michael publishes everything worth knowing about vulnerabilities in Microsoft products and the software updates that are released.



    Sigcheck v2.5
    This release of the command-line tool that reports file version, signing and certificate information adds an option that reports any certificates installed in the system that do not chain to one of the certificates on the Microsoft Certificate Trust List (CTL). It also adds the ability to take image information captured with Sigcheck on a system disconnected from the internet and obtain its status from VirusTotal.

    Sysmon v3.21
    This release of the command-line utility that records key system activity in the Windows event log adds the ability to track the drive and volume label.

    Process Explorer v16.12
    Process Explorer now includes a handle-view column that reports the textual form of handle access masks, along with several bug fixes, including one for an issue that could lead to .NET threads being suspended.

    Autoruns v13.51
    This release of the utility fixes a WMI command-line parsing bug, emits a Unicode BOM in the file generated when saving results to a text file, and adds back the ability to selectively check the signing status of individual entries.

    AccessChk v6.01
    This release adds the ability to handle accounts with long names and fixes a bug that prevented reporting of kernel-object accesses.

    Whois v1.13
    Whois, a command-line utility that reports domain-name ownership information for the specified name or IP address, now includes a fix for a bug that caused the utility to stop working when it was passed an IP address with no DNS mapping.

    RAMMap v1.5
    This update of the utility, which shows detailed information about physical memory usage, enables it to work with the latest version of Windows 10.

    You can download the Sysinternals suite, which already includes all of the latest utility updates.



    Over the last two months, several Russian-language courses for IT professionals have been released on Microsoft Virtual Academy; some of them are translated from English and include Russian subtitles. You can switch on the subtitles in the MVA player using the CC button.

    Windows 10 in the corporate network - take the course

    Windows 10 has been on the market for a relatively long time now. Many devices running Windows 7 and 8.1 have already received the "ten" as a free upgrade. Companies are starting to take a closer look at the new OS and evaluate its potential for corporate tasks. If you are planning a move to Windows 10, now is the time to look at the migration approaches and tools, the options for managing Windows 10 devices, and some of the security technologies of the new OS. In this course you will learn about migrating from Windows 7 and 8.1 to Windows 10, understand when and why to use provisioning, see which tools besides Group Policy can be used to manage Windows 10, and get a look at Windows Hello biometric authentication.

    Building Linux clusters in Azure - take the course

    A Linux cluster in Azure is a set of virtual machines running the Linux operating system, hosted in the Microsoft Azure IaaS cloud and configured to work together on a single task. This course looks at how to run a high availability cluster in the cloud and which problems have to be solved first. It covers in detail the configuration of fault-tolerant storage based on DRBD, as well as the Corosync and Pacemaker cluster software that coordinates the cluster nodes. In addition, using the STONITH library as an example, it walks through configuring fencing of failed nodes. Special attention is paid to configuring all of the components for the Azure cloud infrastructure, although most of the instructions are also valid for building a cluster on a traditional, non-cloud architecture.

    New features of the Windows Server 2016 Preview - take the course

    A group of experts covers many new automation features, as well as support for partner technologies and community feedback in open-source solutions. These sessions look at extended virtualization functionality along with automated processes and settings that help you deploy compute, storage and network resources faster. Get to know the new features that reduce system downtime, learn how rolling upgrades can speed up the adoption of updates and operating systems for Hyper-V and Scale-Out File Server, and study the new storage replication technology. The sessions also introduce Nano Server, a very small-footprint server optimized for the cloud, and scenarios using the new PowerShell Desired State Configuration (DSC) features.

    Security in a world of clouds - take the course

    We review the customer-responsibility strategy presented in the Microsoft Cloud Security for Enterprise Architects poster and give recommendations for modernizing every aspect of your security posture, including governance, containment strategies, security operations, protection of high-value assets, information protection, and user and device security, with particular attention paid to secure administrative management. Learn on the same platform that the Microsoft cybersecurity team uses to assess customers' cloud security and build security plans for them.

    Windows Azure Pack: Database as a Service (DBaaS) - take the course

    Experts introduce the DBaaS offering in the Azure Pack, which provides a powerful shared SQL Server back end that users can consume in self-service mode or programmatically via a rich set of APIs. Using the DBaaS model, application owners and developers can provision, on their own, exactly what their applications and workloads need.

    A detailed look at Azure Resource Manager scenarios and models - take the course

    The course briefly reviews how Azure services were managed before ARM, looks at Azure Resource Manager in detail, presents the key scenarios for using ARM, and studies methods for working with ARM effectively. Modify and deploy some of the many Azure quick-start templates with Visual Studio or PowerShell, and use role-based access control (RBAC) to implement security with ARM. You will take away a lot of useful knowledge from this informative course.



    Cloud computing is fast becoming a vital resource for addressing the world’s problems, as it helps to fuel breakthroughs across a range of economic and social challenges. At the same time, university researchers and nonprofits play a critical role in driving positive change in society. For these reasons we are committed to putting the Microsoft Cloud to work for the public good and Microsoft Philanthropies will be donating $1 billion in Microsoft cloud services to nonprofits and university researchers over the next three years. We believe that with access to the computational power provided by cloud services, these people and organisations can make an even greater positive impact on society. In addition, we are taking steps to bring cloud services to people who today lack affordable broadband access.

    This is all part of our mission to empower every person and every organisation on the planet to achieve more. When it comes to cloud services, there are three key areas on which we are focusing to make this a reality.


    1. Providing resources for the nonprofit community
    As nonprofits work to tackle society’s biggest challenges, they need resources that support them and make it that much easier to do so. With that in mind, we plan to donate our cloud services to 70 000 of these organisations.

    Several nonprofits in the Middle East and Africa (MEA) have already benefited from software donations and are maximising the value of the technology to make a difference in their communities.

    Al Mandiya in Tunisia received $147 440 worth of software to create websites for all 264 municipalities in the country. This is changing the way citizens access information about their cities, while opening up job opportunities for community managers and providing more than 600 youth access to the resources.

    Also in Tunisia, Aroso is using Office 365 as it works to develop road safety programs for its more than 3000 users. Office 365 allows the non-profit to connect its different members, share best practices and structure its data using SharePoint Online.
    In Egypt, the Red Crescent received one of the largest grants, equivalent to $1 440 430, which it is using to upgrade its 600 PCs as well as its data centre.

    2. Expanding access to cloud resources for university researchers
    Another key area in tackling society’s issues is being able to conduct the research needed to understand the issues and how to address them. For this reason, we are expanding our Microsoft Azure for Research program, which offers free Azure storage and computing resources to help faculty accelerate their research.

    In MEA, Tunisia’s I Watch project is making use of our resources for its research. I Watch aims to enhance transparency and fight corruption, especially during election time. Through the innovative use of technology, they have simplified the data collection process and are able to crowdsource voter feedback.
    We aim to expand our support for research projects by 50% so that more initiatives like I Watch can make an impact.

    3. Reaching new communities
    If only the wealthy societies have access to data, intelligence, analytics and insights that come from the power of cloud computing, then we will find ourselves dealing with a whole new digital divide. The first step to ensuring that doesn’t happen is to find ways to connect more people, wherever they are. That’s why we are pursuing new initiatives to combine last-mile connectivity with donated access to our cloud services.

    Our TV White Spaces project is one of the ways in which we are doing this, and we’re excited about its potential to bring broadband connectivity at a low cost to more communities in MEA and around the world.

    By combining connectivity with cloud services and focusing on new public-private partnerships, our goal is to grow what we call our Affordable Access Initiative to 20 projects in at least 15 countries by the end of 2017.

    Addressing social challenges

    Cloud computing is one of the most important transformations of our time, and it has significant applications for health, education and development. Our goal is to put the power of cloud computing in the hands of those who are working to solve the world’s most pressing problems, and to bring it to those who today lack affordable access. That way, we can take an important step in our mission to empower every person and every organization on the planet to achieve more.



    This one has been a long time in coming since my last blog post (almost 10 months to be precise) </grin>, but that's simply due to the amount of field deployments and engagements I have at hand. I'm finally on my (well deserved) vacation from tomorrow, but I'm going to spam the blog with a few posts before I head out with the family.

    My last post dealt with SOFS & firewalls, and I'm following that up with how to design the storage connectivity in a highly secured deployment (for the paranoid). If you look at the architecture below, it represents what you could actually apply in a production scenario (without the need for firewalls :P )

    Network Architecture Diagram:

    What you see here is a deployment based on Dell hardware, with the C6220 as compute, SFP+ 10G-capable switches and MD3060E JBODs as the storage; Cisco N2K top-of-rack (TOR) switches are used, while the Cisco N5K serves as the end-of-row (EOR) switch.

    The way the network flow is structured is as follows:

    Note: This architecture is formulated from the Microsoft Hybrid Cloud Reference Architecture, but devolved specifically to cater to a high security zone deployment.

    1. We have created 4 VLANs on the N2K TOR switches to map to the managed nodes and SOFS. We are also trunking the Z1 & Z2 VLANs, with multiple redundant links from the SOFS + JBODs to the managed Hyper-V nodes' VLANs.
    2. SOFS authenticates with AD in the Z3 VLAN through the RDMA link.
    3. The SOFS RDMA link can ideally stay in the Z3 VLAN that we have created on the switch, but given the nature of the deployment, our recommendation is to have a separate management link to the Z4 VLAN, where the master AD resides.
    4. The Hyper-V management fabric traffic flows through the same SFP+ 10G links, connecting to the Z4 VLAN. There is no need to have separate 1GE ports.

    So what do you achieve with this? Quite a bit, actually. You decouple the storage from the compute and lose the complexity of iSCSI or FC. At a high level, using commodity storage like JBODs is what hyper-scale cloud service providers like Microsoft do.

    What other benefits do you get?

    By letting Windows do the work that was previously the fiefdom of expensive SAN technologies, you end up saving a TON of money, with more control.

    Data delivery via standard protocol:

    • SMB 3.0

    Load-balancing and failover:

    • Teaming (switch agnostic)

    Load aggregation and balancing:

    • SMB multi-channel

    Commodity L2 switching:

    • Cost effective networking (Ethernet)
    • RJ45
    • QSFP

    Quality of Service:

    • Multiple-levels

    Host workload overhead reduction:

    • RDMA

    Scale Easily thanks to Software Defined Networking, while getting more granular control.

    Hope this helps you to design, or even replicate, some of the network topology in your environments (if you're feeling lazy :) ) based on what I've given above.

    Cheers!



    Today’s Tip…

    Microsoft’s Maker team has partnered with Adafruit to release a Raspberry Pi 2 Starter Kit.


    This is a great way to learn about electronics, get into the Internet of Things, and experience Windows 10 IoT Core! And this is just the beginning! Expect more kits over the next year!

    Note: This pack can also be used with Raspbian Linux and Python.


    Microsoft IoT Pack for Raspberry Pi 2 (including Raspberry Pi 2) - http://www.adafruit.com/windows10iotpi2

    Microsoft IoT Pack for Raspberry Pi 2 (just the kit, no Pi 2) - http://www.adafruit.com/products/2702

    Starter Pack includes:

    • Adafruit Raspberry Pi B+ Case - Smoke Base / Clear Top - We think it's the Single Greatest Raspberry Pi 2 Model B Case Ever - though our Pi Box Plus is also nothing to scoff at.
    • ​Full Size Breadboard - In the past, we've used the half-size breadboard for a lot of Pi projects - but no longer! With 40 pins to break out, you're going to need some space - and that's why we're including a full size breadboard in this pack.
    • Premium Male/Male Jumper Wires - 20 x 6" (150mm) - These jumper wires are great for making wire harness or jumpering between headers on PCBs.  We include the longer ones so they work well with the full-size breadboard.
    • Premium Female/Male 'Extension' Jumper Wires - 20x6" - These jumper wires are handy equivalents of the male/male jumper wires - but with female connectors.
    • Miniature Wi-Fi Module - Official Raspberry Pi Edition - The fancy, official, and adorable Wi-Fi Module, made by the Raspberry Pi foundation specifically for use with Raspberry Pi operating systems (and Windows 10).
    • 5V 2A Switching Power Supply w/ 6' MicroUSB Cable - The 5V 2A power adapter is the perfect choice for powering your Raspberry Pi B+ with 2 Amps of current output, and an extra long cord.
    • Assembled Adafruit BMP280 Temperature & Pressure sensor - The assembled version of an Adafruit instant classic.  This breakout board has a powerful BMP280 sensor from Bosch that's good for environmental temperature and barometric pressure sensing.  This version comes with headers already soldered on.
    • Assembled TCS34725 RGB Color Sensor - The assembled version of a longstanding Adafruit classic.  This RGB Color Sensor with IR filter and White LED comes fully assembled - with headers already soldered on.
    • MCP3008 - 8 Channel 10-Bit ADC with SPI Interface - Easy to use, SPI enabled chip that's perfect for adding 8 channels of 10-bit analog input to your microcontroller or microcomputer project.
    • Ethernet Cable - 5-foot-long - An Ethernet cable.
    • 8GB Class 10 SD/MicroSD Memory Card w/ Windows 10 IoT - This SD card comes pre-loaded with all the Windows 10 goodies you can handle.
    • Electronic components
      • 1x Photo Cell
      • 2x Breadboard Trim Potentiometer
      • 5x 10K 5% 1/4W Resistor
      • 5x 560 ohm 5% 1/4W Resistor
      • 1x Diffused 10mm Blue LED
      • 1x Electrolytic Capacitor - 1.0uF
      • 1x Diffused 10mm Red LED
      • 1x Diffused 10mm Green LED
      • 3x 12mm Tactile Switches

    “Getting Started” Instructions and Samples - WindowsOnDevices.com

    Raspbian-Based Guides:



     Gavin Payne is a principal architect for Coeo, a SQL Server and Azure professional services company, and a Microsoft Certified Architect and Microsoft Certified Master. His role is to guide and lead organisations through data platform transformation and cloud adoption programmes.

     

    Despite Microsoft continuously adding functionality to it, the Azure SQL Database service has always been missing one key feature – SQL Server Agent. The reality is, it’s absent rather than missing.  Platform as a Service capabilities are, and should be, promoting platform-wide approaches to scheduling. So now is as good a time as ever for database developers and administrators to transition to a world where their favourite agent is missing.

     

    Azure SQL Database

    The Azure SQL Database service has had a bumpy few years, as well as a few names. SQL Server is popular because of its scalability and broad range of features. Wrapping that many capabilities into a fully managed, API-based service meant tough decisions. There were tears from the community but, more significantly, there were roadblocks stopping adoption.

    In early 2016, life is different. The service’s latest release, V12, now has almost all of the core database engine’s functionality and a few exclusive extras. Global scalability, built-in high availability and a strong T-SQL language often make using it an obvious choice for developers.

     

    The use – and misuse – of SQL Server Agent

    SQL Server Agent is a bolt-on service to the SQL Server database engine. It’s a scheduling engine with (very) basic workflow capabilities. It knows enough about T-SQL, PowerShell, SSIS and operating system commands to manage them. If executing tasks wasn’t useful enough, it also has a notification and alerting engine. In summary, it’s useful.

    Its usefulness was its biggest downfall. A tool that was intended to schedule maintenance tasks often ends up being used as an application workflow engine. I often see tasks scheduled to run once a minute that empty idle shopping baskets or update the counters on management dashboards. Its simplicity lets organisations become dependent on what must be one of the most expensive scheduling and batch processing tools there is.

     

    New world thinking

    In the era of the cloud, developers are creating applications that use bundles of small pieces of functionality.  This helps them scale out using lots of cheap compute services.  These application tiers are where developers should deploy and execute scheduled tasks, even if those tasks subsequently call database stored procedures that do all of the hard work.  The capabilities in Azure to do that are far stronger, more appropriate and cheaper than using a database engine’s internal scheduling engine.

    Azure scheduling options

    Azure provides the Azure Scheduler and Azure Automation services for scheduling and automation. Neither is a like-for-like replacement for SQL Server Agent, but times have changed and we live in a new – better – world.

    The Scheduler service in Azure is a simple service with a narrow set of capabilities. At a given time or on a given schedule, it’ll call an HTTP or HTTPS API, or post a message to a queue. It’s perfect for application environments where business logic can be executed using APIs. If this is how your application works – then schedule your tasks here. If you want to schedule database index maintenance or something else, keep reading.

    The Automation service in Azure is the primary tool for those needing to schedule any other tasks. Like the Azure Scheduler service, it is very different from SQL Server Agent, but this is the cloud PaaS world. The Azure Automation service uses runbooks to manage, schedule and define its jobs; the job logic itself is written as PowerShell commands.

    For those unfamiliar with PowerShell, there’s a gallery of pre-written runbooks that can be used. Helpfully, for those looking to replace SQL Server Agent, there’s one that executes a T-SQL command. The tools are there; it just takes the time to set them up.
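
    As a rough illustration of what that involves (a minimal sketch only, not the gallery runbook itself; the credential asset name, server, database and maintenance statement below are hypothetical), a PowerShell runbook can open a connection to the database and execute a T-SQL command:

        # Retrieve a SQL login stored as an Azure Automation credential asset.
        $cred = Get-AutomationPSCredential -Name 'SqlMaintenanceCredential'

        # Connect to the Azure SQL Database (hypothetical server and database names).
        $conn = New-Object System.Data.SqlClient.SqlConnection
        $conn.ConnectionString = "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydb;User ID=$($cred.UserName);" +
            "Password=$($cred.GetNetworkCredential().Password);Encrypt=True;"
        $conn.Open()

        # Run the maintenance T-SQL; index rebuilds can take a while.
        $cmd = $conn.CreateCommand()
        $cmd.CommandText    = 'ALTER INDEX ALL ON dbo.ShoppingBasket REBUILD;'
        $cmd.CommandTimeout = 3600
        [void]$cmd.ExecuteNonQuery()

        $conn.Close()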

    Simplicity, however, is often traded for capability, and Azure Automation is perhaps, sadly, a good example of this. It uses an Azure Active Directory based security model that can take time to set up, it can’t schedule something to run more often than once an hour, and its best management interface is the Azure Portal.

    However, I’ve seen application development teams who never thought to use it to schedule index maintenance for their Azure SQL Database service databases. For them, it was 30 seconds of clicks, copy and pasting, and scheduling. For database administrators, I recommend it’s something they add to their learning plan.

     

    Our missing agent

    It’s true that in the cloud world, our trusted SQL Server Agent feature is missing from the Azure SQL Database service. As worrying as this sounds, I suspect he’s been gone too long to be missed now.

    If you would like to learn more about Azure SQL Database then check out this Microsoft Virtual Academy course.


    02/05/16 -- 07:30: Azure Stack TP1 is available

    Hi everybody!

    Last Friday Azure Stack Technical Preview 1 was published. Now anyone can install it in a lab environment and get "Azure in your own datacenter". To download Azure Stack TP1, go here.

    Technical documentation regarding the deployment is available here.

    We've already started testing Azure Stack in a lab environment, so expect a lot of information about Azure Stack soon. First piece of advice: don’t try to cheat on the hardware requirements :)



    Reporting SPLA licenses has usually been a challenge for big service providers. Some service providers collect the list of required SPLA licenses manually every month; others use third-party tools like Odin Service Automation to automate this task.

    But recently Microsoft launched an early adoption program of  a set of tools, which automate and simplify SPLA reporting. There is no official name of this toolkit, so I will call it SPLA Reporting. It consists of 3 components:

    1. Software Inventory Logging (SIL) - a component of Windows Server that collects information about the OS edition, server hardware and installed software into a single file and periodically sends it to an aggregator. SIL was added to Windows Server 2012 R2 with the November 2014 update rollup, so most likely you already have it. SIL is also available for installation on Windows Server 2008 R2 SP1 and Windows Server 2012 (non-R2).
    2. Software Inventory Logging Aggregator (SILA) - special software that must be installed on a Windows Server 2012 R2 machine in the service provider environment. SILA does two jobs. First, it queries Hyper-V, vSphere and Xen hosts and pulls information about their hardware configuration (number of CPUs, cores, RAM, etc.). It also adds the information from the SIL inventory files (which the SIL agents send to it) to its database. All of that inventory information is then used to create a report (an Excel spreadsheet).
    3. SPLAReport - a portal on Microsoft servers that is used to convert the inventory data (OS versions, number of CPUs, cores, etc.) into a list of SPLA SKUs. It uses internal logic based on the SPUR - the rules for licensing Microsoft products in service provider environments. Every month you upload the SILA report to the SPLAReport portal, add end-customer enrollment numbers if needed, specify licenses that are used under License Mobility rights, and so on. The final version of the SPLA SKU list is sent to your SPLA Reseller (SPLAR) after you submit the specification.

     What you need to know about this solution:

    • SILA collects data regarding Windows Server, SQL Server and System Center licenses only. Other Microsoft products like Exchange, SharePoint etc. must be added to the list on SPLAReport portal manually.
    • To create a complete inventory report, you must add all virtualized hosts to SILA and configure SIL on all Windows Server guest VMs and non-virtualized servers. Otherwise you'll need to add this information to the report manually.
    • No information is sent to Microsoft. Neither SIL nor SILA sends any information outside your organization; SPLAReport sends information directly to the SPLAR.
    • SIL uses certificate-based authentication on SILA. All the traffic between SIL and SILA is encrypted by SSL.
    • SILA is available as a free download, anybody can install it. Access to SPLAReport must be requested via your representative in Microsoft hosting team.
    • It covers Microsoft products only, so Linux, Oracle and IBM products are entirely outside the scope of SPLA Reporting.
    • Windows Server 2008 (non-R2) and earlier versions are not supported.

    Here is a good explanation of data flows between SIL and SILA:

    At a high level, a typical SPLA Reporting deployment looks like this:

    1. Ask your representative on the Microsoft hosting team to create an account for you on the SPLAReport portal. You need to provide your Microsoft ID (which will be used to log in) and your SPLAR contact information.
    2. Install and configure SILA on a VM in your environment.
    3. Modify your VM templates with Windows Server - enable and configure SIL (see the sketch below).
    4. Configure SIL on the already created VMs with Windows Server.
    5. Configure SIL on non-virtualized Windows Server machines.
    6. Create a report on the SILA server and upload it to the SPLAReport portal.
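
    As a rough sketch of what steps 3 to 5 look like inside a Windows Server machine (the aggregator URI and certificate thumbprint below are hypothetical placeholders), SIL is configured with the Software Inventory Logging PowerShell cmdlets:

        # Point SIL at the SILA aggregator and authenticate with a client certificate.
        Set-SilLogging -TargetUri "https://sila.contoso.com" `
                       -CertificateThumbprint "0f1e2d3c4b5a69788796a5b4c3d2e1f001122334"

        # Start the scheduled forwarding of inventory data to the aggregator.
        Start-SilLogging

        # Optionally push the current inventory immediately rather than waiting.
        Publish-SilData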

    I will explain these steps in detail in my next blog posts. Stay tuned!



    Hello, Wiki Ninjas!

    Today is Friday, time for the International Community Update.

    The standings at the end of January are as follows:

    This month's topics:

    • Turkish contributed more than 100 articles! That's the first time in 16 months.
    • Portuguese saw a big reduction in its article count.
    • No change in the order.

    Thank you!!


    Tomoaki Yoshizawa (yottun8)
    Blog: blog.yottun8.com
    Facebook: Tomoaki Yoshizawa
    twitter: @yottun8
    TechNet Profile: Tomoaki Yoshizawa



    Melissa Mark-Vivertio, Speaker of the New York City Council, and Carmen Fariña, Chancellor of the New York City Schools, announced that Microsoft will provide free copies of Office 365 Education to New York City students and their families. Students and their families will be able to download up to five copies of Office 2016, Office for Mac 2016 and the Office 365 mobile apps available for Windows 10 tablets, iPhone,...(read more)


    Hello, friends of the Wiki Ninja Brazil community.

    Welcome to another international update.

    Today's highlight comes from Canada.



    Let's talk about KEN CENERELLI



    He has been a member of the community since 2011.

    He is a TechNet Guru winner and an MCC.

    He is an MVP in Visual Studio and Development Technologies.


    This is a list of some of the many articles he has written for the community:

    Using Microsoft Application Insights in an MVC application

    Wiki: How to Subscribe to the Wiki Ninjas Blog through RSS in Outlook 2013

    How to enable line numbers for C# in Visual Studio 2013

    Using the Checked and Unchecked keywords in C# to perform overflow checking

    User Page: Ken Cenerelli

    Azure Infographics and Visio Templates

    Microsoft Azure Essentials: Free E-Book Series

    Availability Testing With Microsoft Application Insights

    Azure portal keyboard shortcuts

    Azure PowerShell cmdlets version updates

    Custom Telemetry Events with TrackEvent in Microsoft Application Insights

    Using Microsoft Application Insights in an MVC application

    Creating a Microsoft Application Insights resource

    List Services With PowerShell

    Using the Obsolete Attribute in C#

    Create GUID Tool in Visual Studio

    Understanding the Visual Studio AssemblyInfo Class

    Namespace Aliases in C#

    C# Escape Characters

    Toolbox searching in Visual Studio 2013

    Using the Checked and Unchecked keywords in C# to perform overflow checking

    How to enable line numbers for C# in Visual Studio 2013



    I invite the community to congratulate Ken Cenerelli.

    Thank you for all your contributions.


    Wiki Ninja Hezequias Vasconcelos@++







    Rob Waggoner


    Partners,

    Up until now, if your customer had a domain-joined computer, the Windows 10 upgrade app did not ask your users to upgrade to Windows 10.  This will be changing soon and I want to make sure you are ready for the change.  We announced the change here, and we even documented how you can prevent this upgrade from impacting your customers here.  Your action is to make sure you have a plan for your customers' Windows 10 deployments.  If you want the automatic upgrade, do nothing and the user will be prompted by our Windows 10 upgrade app.  If you have a schedule of your own, you can review the guidance in our knowledge base article on how to prevent the Windows 10 app from running on your clients' machines.
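
    For reference, the knowledge base guidance boils down to a Group Policy or registry setting. A rough sketch follows (please confirm the exact key and value against the KB article before deploying; the value name below is an assumption quoted from memory):

        # Sketch only: verify the key and value against the KB article before use.
        # Blocks the "Get Windows 10" upgrade offer via the WindowsUpdate policy key.
        $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
        if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
        New-ItemProperty -Path $key -Name 'DisableOSUpgrade' `
                         -PropertyType DWord -Value 1 -Force | Out-Null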

    As I mentioned in this blog, if you would rather do a fresh install, as opposed to an in-place upgrade, you can now use your Windows 7 and Windows 8.x keys to activate Windows 10.  Keep in mind that the free Windows 10 upgrade expires on July 29, 2016.

    Until next time,

    Rob

     

