Mobility
Introducing: Log Parser Studio
To download Log Parser Studio, please see the attachment on this blog post.

Anyone who regularly uses Log Parser 2.2 knows just how useful and powerful it can be for obtaining valuable information from IIS (Internet Information Services) and other logs. Adding the power of SQL allows explicit searching of gigabytes of logs, returning only the data that is needed while filtering out the noise. The only thing missing is a great graphical user interface (GUI) to function as a front end to Log Parser, and a 'Query Library' to manage all those great queries and scripts that one builds up over time. Log Parser Studio was created to fulfill this need, allowing those who use Log Parser 2.2 (and even those who don't, due to the lack of an interface) to work faster and more efficiently and get to the data they need with less fiddling with scripts and folders full of queries.

With Log Parser Studio (LPS for short) we can house all of our queries in a central location. We can edit and create new queries in the Query Editor and save them for later. We can search for queries using free-text search, and export and import both libraries and queries in different formats, allowing for easy collaboration as well as storing separate libraries for different protocols.

Processing Logs for Exchange Protocols

We all know this very well: processing logs for different Exchange protocols is a time-consuming task. In the absence of special-purpose tools, it becomes tedious for an Exchange administrator to sift through those logs and process them using Log Parser (or some other tool), especially if the output format is important, and writing those SQL queries requires expertise of its own. You can also use special-purpose scripts found on the web and then analyze the output to make some sense out of those lengthy logs. Log Parser Studio is designed for quick and easy processing of different logs for Exchange protocols. Once you launch it, you'll notice tabs for different Exchange protocols, i.e. Microsoft Exchange ActiveSync (MAS), Exchange Web Services (EWS), Outlook Web App (OWA/HTTP) and others. Under those tabs are dozens of SQL queries written for specific purposes (the description and other particulars of a query are also available in the main UI), which can be run with just one click!

Let's get into the specifics of some of the cool features of Log Parser Studio.

Query Library and Management

Upon launching LPS, the first thing you will see is the Query Library, preloaded with queries. This is where we manage all of our queries. The library is always available by clicking on the Library tab. You can load a query for review or execution using several methods; the easiest is to simply double-click the query in the list, which opens it in its own Query tab. The Query Library is home base for queries: all queries maintained by LPS are stored in this library, and there are easy controls to quickly locate desired queries and mark them as favorites for quick access later.

Library Recovery

The initial library that ships with LPS is embedded in the application and created upon install. If you ever delete, corrupt or lose the library, you can easily reset back to the original by using the recover library feature (Options | Recover Library). When recovering the library, all existing queries will be deleted.
If you have custom or modified queries that you do not want to lose, export those first; after recovering the default set of queries, you can merge them back into LPS.

Import/Export

Depending on your need, the entire library or subsets of the library can be imported and exported, either in the default LPS XML format or as SQL queries. For example, if you have a folder full of Log Parser SQL queries, you can import some or all of them into LPS's library. Usually the only thing you will need to do after the import is make a few adjustments: all LPS needs is the base SQL query, with the filename references swapped out for '[LOGFILEPATH]' and/or '[OUTFILEPATH]', as discussed in detail in the PDF manual included with the tool (you can access it via LPS | Help | Documentation). A worked example of this query shape appears later in this post.

Queries

Remember that a well-written, structured query makes all the difference between a successful query that returns the concise information you need, and a subpar query which taxes your system, returns much more information than you actually need, and in some cases crashes the application. The art of creating great SQL/Log Parser queries is outside the scope of this post; however, all of the queries included with LPS have been written to achieve the most concise results while returning the fewest records. Knowing what you want, and how to get it with the least number of rows returned, is the key!

Batch Jobs and Multithreading

You'll find that LPS in combination with Log Parser 2.2 is a very powerful tool. However, if all you could do was run a single query at a time and wait for the results, you wouldn't be making nearly as much progress as you could be. To address this, LPS supports both batch jobs and multithreaded queries. A batch job is simply a collection of predefined queries that can all be executed with the press of a single button. From within the Batch Manager you can remove any single query (or all queries) as well as execute them. You can also execute them by clicking the Run Multiple Queries button or the Execute button in the Batch Manager. Upon execution, LPS will prepare and execute each query in the batch. By default, LPS sends ALL queries to Log Parser 2.2 as soon as each is prepared. This is where multithreading works in our favor: if we have 50 queries set up as a batch job and execute the job, we'll have 50 threads in the background all working with Log Parser simultaneously, leaving the user free to work with other queries. As each job finishes, the results are passed back to the grid or to CSV output, based on the query type. Even in this scenario you can continue to work with other queries: search, modify and execute. As each query completes, its thread is retired and its resources freed. These threads are managed very efficiently in the background, so there should be no issue running multiple queries at once.

Now, what if we want the queries in a batch to run one at a time rather than simultaneously, for performance or other reasons? This functionality is already built into LPS's options: just check the 'Process Batch Queries in Sequence' checkbox in LPS | Options | Preferences. When checked, the first query in the batch is executed, and the next query will not begin until the first one is complete. This process continues until the last query in the batch has been executed.
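To make the query shape concrete, here is a minimal sketch of what a library-style query looks like once the '[LOGFILEPATH]' and '[OUTFILEPATH]' tokens have been swapped for real paths and the query is handed directly to Log Parser 2.2 from PowerShell. The folder and file names are invented examples, and the LogParser.exe path assumes a default install:

```powershell
# Minimal sketch (paths are hypothetical). LPS stores queries with the
# '[LOGFILEPATH]' and '[OUTFILEPATH]' tokens; outside LPS you substitute
# real paths and hand the query to LogParser.exe yourself.
$logParser = 'C:\Program Files (x86)\Log Parser 2.2\LogParser.exe'

# Count requests per URL, busiest first -- the shape of a typical library query.
$query = @"
SELECT TOP 20 cs-uri-stem AS Url, COUNT(*) AS Hits
INTO 'D:\Reports\TopUrls.csv'
FROM D:\IISLogs\W3SVC1\*.log
GROUP BY cs-uri-stem
ORDER BY Hits DESC
"@

& $logParser $query -i:IISW3C -o:CSV
```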
Automation

In conjunction with batch jobs, automation allows unattended, scheduled execution of batch jobs. For example, we can create a scheduled task that will automatically run a chosen batch job which also operates on a separate set of custom folders. This process requires two components: a folder list file (.FLD) and a batch list file (.XML). We create these ahead of time from within LPS. For more details on how to do that, please refer to the manual.

Charts

Many queries that return data to the Result Grid can be charted using the built-in charting feature. The basic requirements for charts are the same as for Log Parser 2.2:

1. The first column in the grid may be any data type (string, number, etc.).
2. The second column must be some type of number (Integer, Double, Decimal); strings are not allowed.

Keep the above requirements in mind when creating your own queries, so that you consciously write the query to include a number for column two (a chart-friendly example query appears later in this post). To generate a chart, click the chart button after a query has completed. Regarding #2, even if you forgot to do so, you can drag any numeric column and drop it in the second column after the fact. This way, if you have multiple numeric columns, you can simply drag the one you're interested in into the second column and generate different charts from the same data. Again, for more details on the charting feature, please refer to the manual.

Keyboard Shortcuts/Commands

There are multiple keyboard shortcuts built into LPS. You can view the list anytime while using LPS by clicking LPS | Help | Keyboard Shortcuts. The currently included shortcuts are as follows:

- CTRL+N: Start a new query.
- CTRL+S: Save the active query in the library or query tab, depending on which has focus.
- CTRL+Q: Open the library window.
- CTRL+B: Add the selected query in the library to the batch.
- ALT+B: Open the Batch Manager.
- CTRL+D: Duplicate the current active query to a new tab.
- CTRL+ALT+E: Open the error log if one exists.
- CTRL+E: Export the currently selected query results to CSV.
- ALT+F: Add the selected query in the library to the favorites list.
- CTRL+ALT+L: Open the raw library in the first available text editor.
- CTRL+F5: Reload the library from disk.
- F5: Execute the active query.
- F2: Edit the name/description of the currently selected query in the library.
- F3: Display the list of IIS fields.

Supported Input and Output Types

Log Parser 2.2 has the ability to query multiple types of logs. Since LPS is a work in progress, only the most used types are currently available; additional input and output types will be added when possible in upcoming versions or updates.

Supported input types: full support for W3SVC/IIS, CSV and HTTP Error logs, and basic support for all built-in Log Parser 2.2 input formats. In addition, there are some custom LPS formats, such as Microsoft Exchange-specific formats, that are not available with the default Log Parser 2.2 install.

Supported output types: CSV and TXT are the currently supported output file types.
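As promised in the Charts section, here is a sketch of a query shaped for charting: a timestamp in column one and a number in column two, satisfying the requirement that the second column be numeric. The log path is again a hypothetical example, and LogParser.exe is assumed at its default install location:

```powershell
# Chart-friendly query sketch: column one may be any type (here, an hourly
# timestamp), column two is a number (hits in that hour). Paths are hypothetical.
$logParser = 'C:\Program Files (x86)\Log Parser 2.2\LogParser.exe'

# Hits per hour: QUANTIZE buckets the timestamp into 3600-second intervals.
$query = @"
SELECT QUANTIZE(TO_TIMESTAMP(date, time), 3600) AS Hour, COUNT(*) AS Hits
FROM D:\IISLogs\W3SVC1\*.log
GROUP BY QUANTIZE(TO_TIMESTAMP(date, time), 3600)
"@

# -o:DATAGRID pops the results into Log Parser's own grid window.
& $logParser $query -i:IISW3C -o:DATAGRID
```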
Log Parser Studio – Quick Start Guide

Want to skip all the details and just run some queries right now? Start here. The very first thing Log Parser Studio needs to know is where the log files are, and the default location where you would like queries that export their results as CSV files to be saved.

1. Set up your default CSV output path:
   a. Go to LPS | Options | Preferences | Default Output Path.
   b. Browse to and select the folder you would like to use for exported results.
   c. Click Apply.
   d. Any queries that export CSV files will now be saved in this folder.

   NOTE: If you forget to set this path before you start, CSV files will be saved in %AppData%\Microsoft\Log Parser Studio by default, but it is recommended that you move this to another location.

2. Tell LPS where the log files are by opening the Log File Manager. If you try to run a query before completing this step, LPS will prompt you to set the log path; upon clicking OK on that prompt, you are presented with the Log File Manager. Click Add Folder to add a folder, or Add File to add one or more files. When adding a folder you still must select at least one file, so LPS will know which type of log we are working with. When doing so, LPS will automatically turn this into a wildcard (*.xxx), indicating that all matching logs in the folder will be searched. You can easily tell which folder or files are currently being searched by examining the status bar at the bottom right of Log Parser Studio. To see the full path, roll your mouse over the status bar.

   NOTE: LPS and Log Parser handle multiple types of logs and objects that can be queried. It is important to remember that the type of log you are querying must match the query you are performing. In other words, when running a query that expects IIS logs, only IIS logs should be selected in the File Manager. Failure to do this (it's easy to forget) will result in errors or unexpected behavior when running the query.

3. Choose a query from the library and run it:
   a. Click the Library tab if it isn't already selected.
   b. Choose a query in the list and double-click it. This will open the query in its own tab.
   c. Click the Run Single Query button to execute the query.

The query execution will begin in the background. Once the query has completed, there are two possible output targets: the result grid in the top half of the query tab, or a CSV file. Some queries return to the grid, while other, more memory-intensive queries are saved to CSV. As a general rule, queries that may return very large result sets are best served going to a CSV file for further processing in Excel. Once you have the results, there are many features for working with them. For more details, please refer to the manual.

Have fun with Log Parser Studio, and always remember – there's a query for that!

Kary Wall
Escalation Engineer
Microsoft Exchange Support
Improving Security - Together

For many years, client apps have used Basic Authentication to connect to servers, services and endpoints. It is enabled by default on most servers and services, and it's super simple to set up. Simplicity isn't at all bad in itself, but Basic Authentication makes it easier for attackers to capture users' credentials, and so we're taking steps to improve data security in Exchange Online.
Microsoft and Apple Working Together to Improve Exchange Online Security

Today we're delighted to take the next step along the journey to transition away from Basic auth, by sharing the work we've been doing with Apple to help users of their Mail app switch from Basic auth to Modern auth.
Log Parser Studio 2.0 is now available

Since the initial release of Log Parser Studio (LPS) there have been over 30,000 downloads, and thousands of customers use the tool on a daily basis. In Exchange support, many of our engineers use the tool to solve real-world issues every day, and in turn share it with our customers, empowering them to solve the same issues themselves going forward. LPS is still an active work in progress; based on both engineer and customer feedback, many improvements have been made and multiple features added during the last year. Below is a short list of new features:

Improved import/export functionality. For those who create their own queries this is a real time-saver. We can now import from multiple XML files simultaneously, choosing only the queries we wish to import from multiple query libraries or XML files.

Search query results. The existing feature allowing searching of queries in the library is now context-aware, meaning that if you have a completed query in the query window, the search option searches that query; if you are in the library, it searches the library, and so on. This allows drilling down into existing query results without having to run a new query if all you want to do is narrow down existing result sets.

Input/output format support. All Log Parser 2.2 input and output formats have preliminary support in LPS. Each format has its own property window containing all known Log Parser 2.2 settings, which can be modified to your liking.

Exchange extensible logging support. Custom parser support was added for almost all Exchange logs. These are covered by the EEL and EELX log formats included in LPS, which cover Exchange logs from Exchange 2003 through Exchange 2013.

Query logging. I can't tell you how many times I or another engineer spent lots of time creating the perfect query for a particular issue we were troubleshooting, forgot to save the query in the heat of the moment, and lost all that work. No longer! We now have the capability to log every query that is executed to a text file (Query.log). What makes this so valuable is that if you ran it, you can retrieve it.

Queries. There are now over 170 queries in the library, including new sample queries for Exchange 2013.

PowerShell export. You can now export any query as a standalone PowerShell script. The only requirement, of course, is that Log Parser 2.2 is installed on the machine you run it on; LPS itself is not required. There are some limitations, but you can essentially use LPS as a query editor/test bed for PowerShell scripts that run Log Parser queries for you! (A sketch of what such a script might look like follows below.)

Query cancellation. The ability to submit a request to cancel a running query has been added, which will allow you to cancel a running query in many cases.

Keyboard shortcuts. There are now 23 keyboard shortcuts. Be sure to check these out, as they will save you lots of time. To display the shortcuts, use CTRL+K or Help > Keyboard Shortcuts.

There are literally hundreds of improvements and features, far too many to list here, so be sure to check out our blog series with existing and upcoming tutorials, deep dives and more.
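The exact script the PowerShell export feature emits will differ, but the general pattern is a self-contained script that drives Log Parser 2.2 directly. A rough sketch of that pattern, using Log Parser's COM interface (Log Parser 2.2 must be installed; the query and log path are invented examples):

```powershell
# Rough sketch of the kind of standalone script the PowerShell export feature
# produces (the generated script will differ in detail). Uses the Log Parser 2.2
# COM interface, so Log Parser 2.2 must be installed; LPS itself is not required.
$logQuery    = New-Object -ComObject 'MSUtil.LogQuery'
$inputFormat = New-Object -ComObject 'MSUtil.LogQuery.IISW3CInputFormat'

# The log path is hypothetical; substitute your own.
$query = @"
SELECT TOP 10 c-ip AS Client, COUNT(*) AS Hits
FROM D:\IISLogs\W3SVC1\*.log
GROUP BY c-ip
ORDER BY Hits DESC
"@

# Walk the recordset and emit one tab-separated line per row.
$results = $logQuery.Execute($query, $inputFormat)
while (-not $results.atEnd()) {
    $record = $results.getRecord()
    "{0}`t{1}" -f $record.getValue('Client'), $record.getValue('Hits')
    $results.moveNext()
}
```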
If you are installing LPS for the first time, you'll surely want to review the getting started series:

- Getting started with Log Parser Studio
- Getting started with Log Parser Studio, part 2
- Getting started with Log Parser Studio, part 3

If you are already familiar with LPS and are installing this latest version, you'll want to check out the upgrade blog post: Log Parser Studio: upgrading from v1 to v2.

Additional LPS articles can be found here: http://blogs.technet.com/b/karywa/

LPS doesn't require an install, so just extract it to the folder of your choice and run LPS.EXE. If you have the previous version of LPS and have added your own custom queries to the library, be sure to export those queries as a backup before running the newest version. See the "Upgrading to LPS V2" blog post above when upgrading.

Kary Wall
Demystifying Certificate Based Authentication with ActiveSync in Exchange 2013 and 2016 (On-Premises)

Some of the more complicated support calls we see are related to Certificate Based Authentication (CBA) with ActiveSync. This post is intended to provide some clarification on this topic and give you troubleshooting tips.

What is Certificate Based Authentication (CBA)?

Instead of using Basic or WIA (Windows Integrated Authentication), the device has a client (user) certificate installed, which is used for authentication. The user no longer has to save a password to authenticate with Exchange. This is not related to using SSL to connect to the server, as we assume that you already have SSL set up. Also, just to be clear (as some people have those things confused), CBA is not two-factor authentication (2FA).

How does the client certificate get installed on the device?

There are several MDM (Mobile Device Management) solutions that can install the client certificate on the device. The most important part of working with CBA is to know where the client certificate will be accepted (or 'terminated'). How you implement CBA depends on the answers to the following questions:

- Will the Exchange server be accepting the client certificate?
- Will an MDM or other device using Kerberos Constrained Delegation (KCD) be accepting the client certificate?

You can choose only one. You can't have both Exchange and a device accepting the client certificate. This post assumes that the user certificates have already been deployed in AD before CBA was implemented. The requirements for user certificates are documented here: Configure certificate based authentication in Exchange 2016.

If Exchange Server is accepting the client certificate

This configuration is simple and is fully documented in the links below, which apply to Exchange 2013/2016; the configuration for legacy versions follows the same IIS configuration steps. The overall functionality of CBA has not changed across versions, though the requirements may vary. Configuration of CBA is done via IIS Manager. The overall steps are: install the Client Certificate Mapping Authentication feature on all CAS servers, enable client certificate authentication, set SSL client certificates to "required" and disable other authentication methods, and finally enable client certificate mapping on the virtual directory (a scripted sketch of these steps follows below).

- Configure certificate based authentication in Exchange 2016
- Configure certificate based authentication in legacy versions

Important notes:

- You cannot use multiple authentication methods and have client certificates enabled on the virtual directory. The client must use either a client certificate or a username and password to authenticate, not both.
- SSL settings should be set to "Require", not "Accept". You can have connection failures if set improperly.
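The documented steps above are performed in IIS Manager, but the same settings can be scripted. A minimal sketch, assuming the WebAdministration module on the CAS server and the default virtual directory name; treat the documented steps as authoritative and verify before using:

```powershell
# Sketch only: mirrors the documented IIS Manager steps for the case where
# Exchange accepts the client certificate. Run on the CAS server; assumes the
# default "Microsoft-Server-ActiveSync" virtual directory under the Default
# Web Site. Some of these sections may be locked at server level in IIS.
Import-Module WebAdministration

$vdir = 'Default Web Site/Microsoft-Server-ActiveSync'

# Require SSL and require (not just accept) client certificates on the EAS vdir.
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location $vdir `
    -Filter 'system.webServer/security/access' `
    -Name 'sslFlags' -Value 'Ssl,SslNegotiateCert,SslRequireCert'

# Enable Active Directory client certificate mapping on the vdir.
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location $vdir `
    -Filter 'system.webServer/security/authentication/clientCertificateMappingAuthentication' `
    -Name 'enabled' -Value $true
```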
If an MDM or another device is accepting the certificate and using KCD to authenticate the client device

What is important to note here is that the client certificate is accepted at the device; therefore, you would NOT configure client certificates on Exchange. Each vendor should have updated documentation to work with the current Exchange version. To accept the client certificate, the MDM requires that KCD be configured to authenticate to Active Directory. Most vendors expect Windows Integrated Authentication configured on IIS/Exchange. This allows the authentication to be passed without any additional prompts to the client device. All other authentication methods would be disabled.

Note: When you enable Integrated authentication on Exchange, you should ensure that the authentication "Providers" have both NTLM and Negotiate enabled in IIS Manager.

The overall authentication process when the client certificate is accepted by the MDM:

1. The client device contacts the MDM with a client certificate that contains the UPN in the Subject Alternative Name section of the certificate.
2. The MDM authenticates the user with Active Directory.
3. KCD issues a ticket to the MDM with the user's credentials.
4. The MDM sends the user's credentials to Exchange, with Windows Integrated (only) configured on Exchange.
5. Exchange responds to the MDM with the mail data.
6. The MDM responds to the client with the mail data.

Coexistence with Exchange, when Exchange is accepting the client certificate

When adding Exchange 2013/2016 to the environment and Exchange 2013/2016 is accepting the client certificate, it's important to disable any client certificate configuration on the legacy CAS. This is because the client certificate will not be proxied to the legacy server. The authentication on the legacy CAS would go back to the default of Basic on the "Microsoft-Server-ActiveSync" virtual directory, and Windows Integrated on the subfolder named "Proxy".

Troubleshooting

Here are some troubleshooting steps!

If Exchange Server is accepting the client certificate:

- Use the IIS logs and look for requests for /Microsoft-Server-ActiveSync. Determine the error code that is returned. IIS error codes are found here.
- Verify the UPN configured in the "Subject Alternative Name" portion of the client certificate. In ADUC, click "View, Advanced Features", locate the user account, select "Published Certificates", and click the "Details" tab.
- Client certificates and SSL "Required" should not be enabled on the Default Web Site, only on the "Microsoft-Server-ActiveSync" virtual directory.
- Verify there are no additional authentication methods enabled on the Microsoft-Server-ActiveSync virtual directory. See "Step 4" in Configure certificate based authentication in Exchange 2016.

If the MDM is accepting the client certificate:

- With the MDM vendor, verify that KCD is working correctly by checking security logs on the MDM to confirm Kerberos is working.
- Verify the request is getting to Exchange by looking at the IIS log requests for /Microsoft-Server-ActiveSync.
- Verify Windows Integrated (only) is enabled on Exchange.

Attachments

If users have issues with attachments, follow "Step 7" in Configure certificate based authentication in Exchange 2016.

Troubleshooting Logs and Tools

- Use the IIS logs to determine if the device reached the Exchange server. Look for requests to the /Microsoft-Server-ActiveSync virtual directory. Refer to the "HTTP status code in IIS 7.0, IIS 7.5, and IIS 8.0" KB for information on the various error codes in the IIS logs. Example of an IIS error code: 403.7 - Client certificate required. From this you would verify that the device has the client certificate installed.
- Log Parser Studio - Log Parser Studio is a GUI for Log Parser 2.2. LPS greatly reduces complexity when parsing logs. Download it here.

A scripted example of digging a specific status code out of the IIS logs follows below.
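To make the log reading concrete, here is a minimal sketch of pulling failing ActiveSync requests (for example the 403.7 mentioned above) out of an IIS log with plain PowerShell. The log path is a hypothetical example, and because the W3C field order depends on your IIS logging configuration, the column names are read from the log's own #Fields header:

```powershell
# Sketch: find failing Microsoft-Server-ActiveSync requests in an IIS log.
# The log path is hypothetical; fields such as sc-substatus appear only if
# they are enabled in your W3C logging configuration.
$logFile = 'D:\IISLogs\W3SVC1\u_ex250101.log'

# Read the column names from the #Fields header line.
$fields = (Get-Content $logFile | Where-Object { $_ -like '#Fields:*' } |
           Select-Object -First 1) -replace '^#Fields:\s*' -split ' '

Get-Content $logFile | Where-Object { $_ -notlike '#*' } | ForEach-Object {
    # Turn each space-delimited row into an object keyed by field name.
    $values = $_ -split ' '
    $row = @{}
    for ($i = 0; $i -lt $fields.Count; $i++) { $row[$fields[$i]] = $values[$i] }
    [pscustomobject]$row
} | Where-Object {
    # 403 responses to the EAS virtual directory; check sc-substatus for the .7.
    $_.'cs-uri-stem' -like '*Microsoft-Server-ActiveSync*' -and $_.'sc-status' -eq '403'
} | Select-Object 'c-ip', 'cs-username', 'sc-status', 'sc-substatus'
```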
I wanted to thank Jim Martin for technical review of this post.

Charlene Stephens (Weber)

Controlling Exchange ActiveSync device access using the Allow/Block/Quarantine list

What is the Allow/Block/Quarantine list?

In Exchange 2010 we added a feature called the Allow/Block/Quarantine list (or ABQ for short). This feature was designed to help IT organizations control which of the growing number of Exchange ActiveSync-enabled devices are allowed to connect to their Exchange servers. With this feature, organizations can choose which devices (or families of devices) can connect using Exchange ActiveSync (and conversely, which are blocked or quarantined). Some of you may remember my previous post on this topic dealing with organizations that do not have Exchange 2010, so I wanted to show you the far better way you can do this in Exchange 2010 (which is also what you will see in Office 365 and Exchange Online if you are looking at our cloud-based offerings).

It is important to understand that the ABQ list is not meant to displace policy controls implemented using Exchange ActiveSync policies. Policy controls allow you to control and manage device features (such as remote wipe, PIN passwords, encryption, camera blocking, etc.), whereas the ABQ list is about controlling which devices are allowed to connect (for example, there may be a lot of devices that support EAS PIN policies, but some IT departments only want to allow certain devices to connect, to limit support or testing costs). The easy takeaway is that Exchange ActiveSync policies allow you to limit device access by capabilities, while the Allow/Block/Quarantine list allows you to control device access by device type. If you're curious as to which device OSes support which policies, the Wikipedia article we blogged about is a good place to look.

Different device access models for different folks

When we designed the ABQ list, we talked to a lot of organizations to find out how all of you use (or wanted to use) this kind of technology. What we realized is that there is a continuum of organizations: from permissive organizations that let employees connect whatever device they have to their Exchange server, all the way to restrictive organizations that only support specific devices. Since we always want to make our software as flexible for IT as possible (as we know there are a lot of you folks using our software in a lot of different ways), we created this feature so that no matter which type of organization you are (or even if you are somewhere in between these two extremes), we could help meet your needs. Below are some descriptions and how-tos for using the ABQ list in these different ways.

The restrictive organization

Restrictive organizations follow a more traditional design where only a set of supported devices is allowed to connect to the Exchange server. In this case, the IT department will only choose to allow the particular devices they support, and all other devices are blocked. It's important to note that a restrictive organization is created by specifying a set of allowed devices and blocking the unknown.

The permissive organization

Permissive organizations allow all (or most) devices to connect to their Exchange server. In these cases, the ABQ list can help organizations block a particular device or set of devices from connecting. This is useful if there's a security vulnerability, or if the device is putting a particularly heavy load on the Exchange server. In these cases, the IT department can identify the misbehaving device and block that device until a fix or update for that device brings it into compliance. All other devices, including unknown devices, are given access.
The one-off case

Of course, if you are limiting the devices that connect to your organization, there's almost always a need for an exception. Whether it's testing a new device before rolling it out to the organization as a supported device, or an exception made for an executive, we wanted to give you the ability to make an exception without allowing all users with that device to access your organization's email and PIM data.

When to quarantine

Quarantining devices is useful when an IT department wants to monitor new devices connecting to their organization. Both permissive and restrictive organizations may choose to employ this mechanism. In a permissive organization, quarantine can be used so that IT administrators know what devices, and which users, are making new connections. In restrictive organizations, this can be used to see who is trying to work around policy, and also to gauge demand from "Bring Your Own Device" (BYOD) users.

Now that we've gone through the theory, let's talk about how we would do this in practice.

Accessing the ABQ settings

1. Log in to the Exchange Control Panel (ECP) (you can also access the ECP from Outlook Web App (OWA) by selecting Options > See all options).
2. In the ECP, make sure you are managing My Organization (#1 in the screenshot below). Be aware that most users won't see the "My Organization" option; it's only visible to users with Exchange Administrator access.
3. Select Phone & Voice (#2 in the screenshot below) > ActiveSync Access tab (#3 in the screenshot below). This is the Allow/Block/Quarantine configuration screen.

Note for all you Exchange Management Shell (EMS) gurus: you can also configure device access using PowerShell cmdlets if you prefer, as sketched below.
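For the EMS-inclined, a minimal sketch of the equivalent shell commands. The device strings here are invented examples; use the values your own devices report (see Get-Help for the full parameter sets):

```powershell
# Sketch: ABQ device access rules from the Exchange Management Shell.
# The query strings are invented examples; use the model/type strings your
# devices actually report (Get-ActiveSyncDevice shows them).

# Create an access rule for a specific device model.
New-ActiveSyncDeviceAccessRule -QueryString 'HTC-ST7377' `
    -Characteristic DeviceModel -AccessLevel Allow

# Or target a broader grouping via the device type instead of one model.
New-ActiveSyncDeviceAccessRule -QueryString 'PocketPC' `
    -Characteristic DeviceType -AccessLevel Quarantine
```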
Creating a device (or a family of devices) rule

To create a new rule, select New from the Device Access Rules section of the ABQ page (#5 in the screenshot above). When setting up a rule for a device, it is important to understand the difference between the "family" of the device and the specific device. This information is communicated as part of the EAS protocol and is reported by the device itself. In general, you can think of the device rule as applying only to the particular device type (like an HTC-ST7377, as shown in the image below), whereas a device family might be something broader, like "Pocket PC". This distinction between the specific (device type) and the general (device family) is important, since many device manufacturers actually release the same device with different names on different carriers, and the family grouping saves you from having to make a separate rule for each device. For instance, the HTC Touch Pro was available on all four major US carriers as well as some of the regional ones, and that's just the USA, not to mention the other versions around the world. As you can see, making a rule for each of those different devices (which are all in the same family and effectively the same device) could mean a lot of extra work for IT, so we added the family grouping to help you make good decisions about devices in bulk. It's important to note that when making a new rule, you select either the device family or the model, but not both.

Once you've selected the device or a device family, you can then choose what Exchange will do with that device (in this example, I'm just going to use a specific device). This brings you to the New Device Access Rule page. The easiest way to set the rule is to select Browse, which will show you a list of all the devices or device families that have recently connected to your Exchange server. Once you've selected the device or family, you can choose the action to take. This is where you can choose to block the device if you are a permissive organization looking to limit a specific device for a specific reason, or where you can set access rules if you are a restrictive organization (in such a case you would create an allow rule for each supported device and then set the state for all unknown devices to block; we'll talk about how to set the action for unknown devices in the next section). Once you select the action (Allow access, Block access, or Quarantine), click Save and you're done! You can repeat this process for each rule you want to create, and you can have both block and allow rules simultaneously.

Setting up a rule for unknown devices

To access the rule for unknown devices, select Edit (#4 in Figure 5 above). On the Exchange ActiveSync Settings page, you can configure the action to take when Exchange sees a user trying to connect with a device that it does not recognize. By default, Exchange allows connections from all devices for users that are enabled for EAS. This example configures the Exchange organization to quarantine all unknown devices. This means that if there's no rule for the device (or device family), and no exception for the particular user, an unknown device will follow this behavior.

Quarantine notifications

We have the ability to specify who gets an email alert when a device is placed in quarantine. You can add one or more administrators (or users), or even a distribution group, to this list of notified individuals. Anyone on this list will receive an email like the one shown in the screenshot below. The notification provides information about who tried to connect the device, the device details, and when the attempt was made.

Custom quarantine message

You can also set a custom message that will be delivered to the user in their Inbox and on their device. Although the device is in quarantine, we send this one message to the device so the user doesn't automatically call the help desk because their device isn't syncing. The custom message is added to the notification email to the user that their device is in quarantine. The user and device will also now appear on the Quarantined Devices list on the ABQ configuration page.

Managing quarantined devices

The device will stay in quarantine until an administrator decides to allow or block it. This can be done by selecting the device and then clicking the Allow or Block buttons in Quarantined Devices. This creates a personal exemption (the "one-off case" mentioned earlier). If you wish to create an access rule that applies to all devices of the same family or model, you can select Create a rule for similar devices to open a new, prepopulated rule.

Making changes

Of course, we realize that many organizations are dynamic and have changing requirements and policies. Any of the rules that have been set up can be changed dynamically by accessing the ABQ page in the ECP and editing, deleting, or adding the desired rule. (The organization-level settings can also be set from the shell; see the sketch below.)
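The organization-wide pieces described above (the unknown-device default, the notification recipients, and the custom quarantine message) can also be set from the shell. A minimal sketch; the recipient address and message text are invented examples:

```powershell
# Sketch: organization-level ABQ settings from the Exchange Management Shell.
# The recipient and message text below are invented examples.
Set-ActiveSyncOrganizationSettings -DefaultAccessLevel Quarantine `
    -AdminMailRecipients 'easadmins@contoso.com' `
    -UserMailInsert 'Your device is in quarantine. IT will review your request shortly.'
```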
Adam Glick (@MobileGlick)
Sr. Technical Product Manager

P.S. To read about Microsoft's licensing of Exchange ActiveSync, check out this article on Microsoft NewsCenter. Julia White also put up a more business-focused blog on the UC Blog about the importance of EAS to Exchange 2010 customers.
Life in a Post TMG World – Is It As Scary As You Think?

Let's start this post about Exchange with a common question: now that Microsoft has stopped selling TMG, should I rip it out and find something else to publish Exchange with?

I have occasionally tried to answer this question with an analogy. Let's try it. My car (let's call it Threat Management Gateway, or TMG for short) isn't actively developed or sold any more. However, it works fine right now, it does what I need (publishes Exchange securely), and I can get parts for it and have it serviced as needed (extended support for TMG ends in 2020), so I'm keeping it. When it eventually either doesn't meet my requirements (I want to publish something it can't do) or runs out of life (2020, but it could be later if I am OK accepting the risk of no support), then I'll replace it.

Now, it might seem odd to offer up a car analogy to explain why Microsoft no longer selling TMG is not a reason for Exchange customers to panic, but I hope you'll agree it works, and it leads you to conclude that when something stops being sold, like your car, it doesn't immediately mean you replace it. Instead, you think about the situation and decide what to do next. You might well decide to go ahead and replace TMG simply based on our decision to stop selling or updating it; that's fine, but just make sure you are thinking the decision through. Of course, you might also decide not to buy another car at all. Your needs have changed. Think about that.

Here are some interesting Exchange-related facts to help further cement the idea I'm eventually going to get to:

1. We do not require traffic to be authenticated prior to hitting services in front of Exchange Online.
2. We do not do any form of pre-authentication of services in front of our corporate, on-premises messaging deployments either.
3. We have spent an awfully large amount of time as a company working on securing our code, writing secure code, testing our code for security, and understanding the threats that exist to our code. This is why we feel confident enough to do #1 and #2.
4. We have come to learn that adding layers of security often adds little additional security, but certainly lots of complexity.
5. We have invested in getting our policies right and in monitoring our systems.

This basically says we didn't buy another car when ours didn't meet our needs any more. We don't use TMG to protect ourselves any more. Why did we decide that? To explain, you have to cast your mind back to the days of Exchange and Windows 2000. The first thing to admit is that our code was less 'optimal' (that's a polite way of putting it), and there were security issues caused by anonymous access. So how did we (Exchange) tell you to guard against them? By using something called ISA (Internet Security and Acceleration – which is an odd name for what it was, a firewall). ISA, amongst other things, did pre-authentication of connections. It forced users to authenticate to it, so it could then allow only authenticated users access to Exchange. It essentially stopped anonymous users getting to Windows and Exchange, which was good for Windows and Exchange, because there were all kinds of things they could do if they got there anonymously. However, once authenticated users got access, they too could still do those bad things if they chose to, and so of course could anyone not coming through ISA, such as internal users. So why would you use ISA? So that you would know who these external users were, wouldn't you? But do you really think that's true?
Do you think most customers a) noticed something bad was going on, and b) trawled logs to find out who did it? No, they didn't. So it was a bit like an insurance policy: you bought it, you knew you had it, you didn't really check whether it covered what you were doing until you needed it, and by then it was too late; you found out your policy didn't cover that scenario, and you were in the deep doo doo. Insurance alone is not enough. If you put any security device in front of anything, it doesn't mean you can or should just walk away and call it secure.

So at around the same time we were telling customers to use ISA, back in the 2000 days, the whole millennium bug thing was over and the proliferation of the PC and the Internet was continuing to expand. This is a very nice write-up on the Microsoft view of the world. Those industry changes ultimately resulted in something we called Trustworthy Computing, which was all about changing the way we develop software: "The data our software and services store on behalf of our customers should be protected from harm and used or modified only in appropriate ways. Security models should be easy for developers to understand and build into their applications." There was also the Secure Windows Initiative. And the Security Development Lifecycle. And many other three-letter acronyms, I'm sure, because whatever it was you did, it needed a good TLA.

We made a lot of progress over the ten years since then. We delivered on the goal that the security of the application can be better managed inside the OS and the application, rather than at the network layer. But of course most people still seem to think of security as being mainly at the network layer, so think for a moment about what your hardware/software/appliance-based firewall does today. It allows connections from a source, on some configurable protocol/port, to a configured destination protocol/port. If you have a load balancer, and you configure it to allow inbound connections to an IP on its external interface, to TCP 443 specifically, telling it to ignore everything else, and it takes those packets and forwards them to your Exchange servers, is that not the same thing as a firewall? Your load balancer is a packet-filtering firewall. Don't tell your load balancing vendor that (they might want to charge you extra for it), but it is.

And when you couple that packet-level filtering firewall/load balancer with software behind it that has been hardened for ten years against attacks, you have a pretty darn secure setup. And that is the point. If you hang one leg of your load balancer on the Internet and one leg on your LAN, and you operate a secure and well-managed Windows/Exchange server, you have a more secure environment than you think. Adding pre-authentication and layers of networking complexity in front of that buys you very little extra, if anything.

So let's apply this directly to Exchange, and try to offer you some advice from all of this. What should YOU do? The first thing to realize is that you now have a CHOICE, and the real goal of this post is to help you make an INFORMED choice. If you understand the risks, and know what you can and cannot do to mitigate them, you can make better decisions. Do I think everyone should throw out that TMG box they have today and go firewall commando? No, not at all. I think they should evaluate what it does for them, and decide if they need it going forward.
If they do that and decide they still want pre-auth, then they can find something that can do it when the time to replace TMG comes. You could consider it a sliding scale of choice, something like the diagram illustrates, showing that there are options along the spectrum:

Just use a load balancer – as discussed previously, a load balancer only allowing in specified traffic is a packet-filtering firewall. You can't just put it there and leave it, though; you need to make sure you keep it up to date, keep your servers up to date, and possibly employ some form of IDS solution to tell you if there's a problem. This is what Office 365 does.

TMG/UAG – at the other end of the scale are the old-school 'application level' firewall products. Microsoft has stopped selling TMG, but as I said earlier, that doesn't mean you can't use it if you already have it, and it doesn't stop you using it if you buy an appliance with it embedded.

In the middle of these two extremes (though ARR is further to the left of the spectrum, as shown in the diagram) are some other options. Some load balancing vendors offer pre-authentication modules, if you absolutely must have pre-auth (but again, really, you should question the reason); some use LDAP, some require domain-joining the appliance and using Kerberos Constrained Delegation. Microsoft has two options here too.

The first (and favored by pirates the world over) is Application Request Routing, or ARR! for short. ARR! (the ! is my own addition; marketing didn't add that to the acronym, but if marketing were run by pirates, they would have) "is a proxy based routing module that forwards HTTP requests to application servers based on HTTP headers and server variables, and load balance algorithms" – read about it here, and in the series of blog posts we'll be publishing here in the not too distant future. It is a reverse proxy. It does not do pre-authentication, but it does let you put a non-domain-joined machine in front of Exchange to terminate the SSL; if your 1990s-style security policy absolutely requires it, ARR is an option.

The second is WAP. Another TLA. Recently announced at TechEd 2013 in New Orleans is the upcoming Windows Server 2012 R2 feature, Web Application Proxy. It is focused on browser- and device-based access, with strong ADFS support, and WAP is the direction the Windows team is investing in these days. It can currently offer pre-authentication for OWA access, but not for Outlook Anywhere or ActiveSync. See a video of the TechEd session here (the US session) and here (the Europe session).

Of course, all this does raise some tough questions. So let's try to answer a few of those.

Q: I hear what you are saying, but Windows is totally insecure; my security guy told me so.
A: Yes, he's right. Well, he was right, in the yesteryear world in which he formed that opinion. But times have changed, and when was the last time he verified that belief? Is it still true? Do things change in this industry?

Q: My security guy says Microsoft keeps releasing security patches, and surely that's a sign that their software is full of holes?
A: Or is the opposite true? All software has the potential for bugs and exploits, and not telling customers about risks, or not releasing patches for issues discovered, is negligent. Microsoft takes the view that informed customers are safer customers, and making vulnerabilities and mitigations known is the best way of protecting against them.
Q: My security guy says he can't keep up with the patches, so he wants to make the server 'secure' and then leave it alone. Is that a good idea?
A: No, it's not. That isn't (I hope) what he does with his routers and hardware-based firewalls, is it? Software is a point-in-time piece of code. Security software guards against exploits and attacks it knows of today. What about tomorrow? None of us are saying Windows, or any other vendor's solution, is secure forever, which is why a well-managed and secure network keeps machines monitored and patched. If he does not patch other devices in the chain, overall security is compromised. Patches are the reality of life today, and they are the way we keep up with the bad guys.

Q: My security guy says his hardware-based firewall appliance is much more secure than any Windows box.
A: Sure. Right up to the point at which that device has a vulnerability exposed. Any security device is only as secure as the code that was written to counter the threats known at that time. After that, it's all the same; they can all be exploited.

Q: My security guy says I can't have traffic going all the way through his two layers of DMZ and multitude of devices, because it is policy. It is more secure if it gets terminated and inspected at every level.
A: Policy. I love it when I hear that. Who made the policy? And when? Was it a few years back? Have the business requirements changed since then? Have the risks they saw back then changed any? Sure they have, but rarely does the policy get updated. It's very hard to change the entire architecture for Exchange, but I think it's fair to question the policy. If they must have multiple layers, for whatever perceived benefit that gives (ask them what it really does, and how they know when a layer has been breached), there are ways to do that, but one could argue that more layers doesn't necessarily make it better; it just makes it harder. Harder to monitor, and to manage.

Q: My security guy says if I don't allow access from outside except through a VPN, we are more secure.
A: But every client who connects via a VPN adds one more gateway/endpoint to the network, doesn't it? And they have access to everything on the network, rather than just a single port/protocol. How is that necessarily more secure? Plus, how many users like VPNs? Does making it harder to connect and get email, so people can do their job, make them more productive? No; it usually means they do less work, as they can't be bothered to input a little code just so they can check email.

Q: My security guy says if we allow users to authenticate from the Internet to Exchange, then we will be exposed to an account lockout denial of service (DoS).
A: Yes, he's right. Well, he's right only because account lockout policies are being used, something we've been advising against for years, as they invite account lockout DoS attacks. These days, users typically have their SMTP address set to equal their User Principal Name (UPN) so they can log on with (what they think is) their email address. If you know someone's email address, you know their account logon name. Is that a problem? Well, only if you use account lockout policies rather than strong passwords/passphrases and monitoring. That's what we have been telling people for years. But many security people feel that account lockouts are their first line of defense against dictionary attacks trying to steal passwords.
In fact, you could also argue that a bad guy trying out passwords and getting locked out now knows the account he's trying is valid...

Note the common theme in these questions is obviously "the security guy said...". It's not that I have it in for security guys generally speaking, but given that they are the people who ask these questions, in my experience some of them think their job is to secure access by preventing access. If you can't get to it, it must be safe, right? Wrong. Their job is to secure the business requirements. Or, put another way, to allow their business to do its work, securely. After all, most businesses are not in the business of security. They make pencils. Or cupcakes. Or do something else. And so the job of the security folks working at those companies is to help them make pencils, or cupcakes, securely, not to stop them from doing those things.

So there you go, you have choices. What should you choose? I'm fine with you choosing any of them, but only if you choose the one that meets your needs, based on your comfort with risk, your operational level of skill, and your budget.

Greg Taylor
Principal Program Manager Lead
Exchange Customer Adoption Team
Controlling Outlook for Android data on wearable devices

Outlook for Android supports Android and Tizen wearable technology. When the Outlook app is installed on the wearable, the user can receive message notifications and event reminders, interact with messages, and view daily calendars.