Mobile Device and App Management
Microsoft Intune introduces MDM Security Baselines to secure the modern workplace
A security baseline is a group of Microsoft-recommended configuration settings that explain their security impact. An industry-standard configuration that is broadly known and well tested, such as the Microsoft security baselines, increases efficiency and reduces costs compared to building a configuration yourself. These settings are continually updated with feedback from Microsoft security engineering teams, product groups, partners, and real-world learning from thousands of customers. Microsoft Intune security baselines provide intelligent recommendations that are relevant to the needs of your business, based on your IT infrastructure.

How does Microsoft Intune transform Android enterprise management? Let me count the ways
Android Enterprise replaces the legacy Device Administration API (referred to as "device admin" in this article) to provide enhanced privacy, security, and management capabilities for company-owned and bring-your-own devices alike. Microsoft supports the Google recommendation that all partners and customers move off of device admin. In this article, we explore the flexible paths Microsoft Intune customers can choose when planning their Android management.

Microsoft Edge on iOS and Android now supports conditional access and single sign-on
Microsoft Enterprise Mobility + Security (EMS) is excited to deliver Azure Active Directory conditional access protection for Microsoft Edge on iOS and Android. This integration expands the Microsoft Intune management capabilities as you deploy Microsoft Edge for the best browsing experience across all endpoints in the enterprise. Users get easy, secure access to Office 365 and all your web apps that use Azure Active Directory, with the same application management and security capabilities that previously required the Intune Managed Browser.

Evolution of macOS management capabilities in Microsoft Intune
Back in 2015 I wrote a blog about Mac management with Intune. It's been a few years, so I feel it's time we revisit Mac management with Intune to learn what's changed. You'll soon see there's been a significant amount of progress, and since my first post Intune now has a lot of native Mac management capabilities built in. First, let's look at macOS enrollment options with Intune.

macOS enrollment options
There are two methods to enroll macOS devices with Intune: user-driven enrollment or the Apple Device Enrollment Program.

User-driven enrollment
For user-driven enrollment the end user signs into the web-based version of the Company Portal at https://portal.manage.microsoft.com. If the user already has a device registered it will show on the screen; if the Mac is the first device being enrolled, they'll see a prompt to add one. Once the user selects "Add this one by tapping here" they'll be prompted to download the Intune Company Portal app. After the Company Portal is downloaded and installed, open it and sign in with your corporate credentials. These are the same credentials used to sign into Office 365 (derived from Azure AD). After sign-in completes, the device begins the enrollment process. For more details on user-driven Mac enrollment please visit: https://docs.microsoft.com/en-us/intune-user-help/enroll-your-device-in-intune-macos-cp

Apple Device Enrollment Program
The concept of Apple DEP is to associate devices with an organization and streamline the enrollment process, similar to enrolling Apple iOS devices. Enrollment, however, uses a different flow: an Apple enrollment token is added to Intune, and an enrollment profile is then created in Intune and associated with that token. During enrollment profile creation you'll be asked to select user affinity (i.e. userless or user-associated). Once user affinity is selected, you'll also choose whether users can remove the enrollment profile via the "Locked enrollment" setting. Finally, you'll customize the Setup Assistant, which allows for hiding setup screens such as Apple Pay, Siri, Registration, etc. For more details on the Apple enrollment token process with Intune please visit: https://docs.microsoft.com/en-us/intune/device-enrollment-program-enroll-macos

Conditional access
An exciting feature of Azure AD is the ability to target specific device platforms (e.g. macOS) and set a series of conditions for access by creating conditional access policies in Azure AD.

Compliance
Azure AD and Intune compliance policies also play a role in access. Step through the compliance settings below to see the restrictions that may be required for a device to be compliant.

Device health: System Integrity Protection prevents malicious apps from modifying protected files and folders.
Device properties: Specify which OS versions and builds you'll allow before the device can access corporate resources.
System security: Configure password requirements and password integrity, storage encryption, the firewall, and Gatekeeper to protect against malware.
Actions to take for non-compliance: Take action when devices are not compliant with the compliance policy by sending the user an email and/or locking the device.

Associating an Intune compliance policy with an Azure AD conditional access policy
Create an Azure AD conditional access policy that requires the device to be compliant before it can access corporate resources.
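If you automate this setup, the same compliance checks can also be created programmatically. Below is a minimal sketch using Python and the Microsoft Graph Intune API; the endpoint, the macOSCompliancePolicy type, and the property names are taken from the Graph reference as I understand it (verify them against the current documentation before relying on them), and the access token is a placeholder you would obtain from your own Azure AD app registration.

```python
import requests

GRAPH = "https://graph.microsoft.com/beta"
TOKEN = "<access token with DeviceManagementConfiguration.ReadWrite.All>"  # placeholder

# Illustrative macOS compliance policy mirroring the checks described above:
# system integrity protection, minimum OS version, password, encryption,
# firewall, and Gatekeeper.
policy = {
    "@odata.type": "#microsoft.graph.macOSCompliancePolicy",
    "displayName": "macOS baseline compliance",
    "systemIntegrityProtectionEnabled": True,
    "osMinimumVersion": "10.14",
    "passwordRequired": True,
    "passwordMinimumLength": 8,
    "storageRequireEncryption": True,
    "firewallEnabled": True,
    "gatekeeperAllowedAppSource": "macAppStoreAndIdentifiedDevelopers",
    # Creating a compliance policy also needs at least one action for
    # non-compliance, e.g. block access after a grace period.
    "scheduledActionsForRule": [{
        "ruleName": "PasswordRequired",
        "scheduledActionConfigurations": [
            {"actionType": "block", "gracePeriodHours": 0}
        ],
    }],
}

resp = requests.post(
    f"{GRAPH}/deviceManagement/deviceCompliancePolicies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```

Once created, the policy is assigned to groups as usual, and the Azure AD conditional access policy described above evaluates the resulting compliance state.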
Looking at device configuration for macOS, there are a number of settings, and in my opinion those settings address a lot of organizations' requirements for Apple Mac management:

Device features
Device restrictions
Endpoint protection: Looking to protect the device further by configuring the firewall and controlling where apps are installed from? Gatekeeper will help with those requirements. You can further configure firewall settings to decide what you'll allow in and which apps are allowed and/or blocked.
Certificates: Intune supports PKCS certificates for general and S/MIME purposes. Device and user-based certificates are both supported via SCEP.
VPN: Many VPN settings are available, including third-party VPN support. Make note of on-demand and per-app VPN. Use a proxy server? No problem!
Wi-Fi: Both Basic and Enterprise Wi-Fi profiles are supported with various authentication types.
Customize with Apple Configurator: Don't see a setting in the UI? Not to worry, as you can create a custom profile using Apple Profile Manager and/or Apple Configurator and upload the payload for delivery through Intune (a minimal sketch of that upload via Microsoft Graph follows at the end of this post).

App deployment
Both line-of-business and Office apps are supported right from the UI. When selecting "Line-of-business app" the macOS app must be wrapped using the app wrapping tool for Mac, which wraps the app and gives it an extension of .intuneMac. The tool is available on GitHub: https://github.com/msintuneappsdk/intune-app-wrapping-tool-mac To learn more about Mac app deployment with Intune please visit: https://docs.microsoft.com/en-us/intune/lob-apps-macos One of my peers, Scott Duffey (@Scottduf), has a great post on this topic: https://blogs.technet.microsoft.com/microscott/deploying-apps-to-macs-using-microsoft-intune/ Note: as of this post only .pkg files are supported; conversions from .dmg to .pkg are not.

Microsoft + Jamf partnership
Microsoft also has a partnership with Jamf. Jamf also provides macOS management, and if your organization currently utilizes Jamf and would like the benefits of integrating Jamf with Intune, you can do this today with Jamf Pro. So, what does this mean? macOS devices managed by Jamf remain managed by Jamf when Intune comes into the picture (they are only registered with Intune, not enrolled), and integrating Jamf Pro with Intune provides a path for Jamf to send signals, in the form of inventory, to Intune. Intune uses compliance policies to evaluate the Jamf signals and in turn tells Azure AD whether the device is compliant or not. The Azure AD conditional access policy then kicks in and, based on how you configured it, will either block the user or challenge them to remediate before accessing company resources. For more details about Intune and Jamf integration please visit: https://docs.microsoft.com/en-us/intune/conditional-access-integrate-jamf Jamf also has a whitepaper about Intune integration: https://www.jamf.com/resources/technical-papers/integrating-with-microsoft-intune-to-enforce-compliance-on-macs/

That's it for now; however, Microsoft is always releasing updates for Intune. Check back monthly with What's new in Microsoft Intune and be sure to check which Intune features are under development by visiting: https://docs.microsoft.com/en-us/intune/in-development Article re-posted from https://uem4all.com/2019/03/11/intune-macos-management/
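As referenced in the "Customize with Apple Configurator" item above, here is a minimal sketch of uploading a custom .mobileconfig payload to Intune with Python and Microsoft Graph. The macOSCustomConfiguration type and its properties are my reading of the Graph reference (verify against the current documentation); the file name and token are placeholders.

```python
import base64
import requests

GRAPH = "https://graph.microsoft.com/beta"
TOKEN = "<access token with DeviceManagementConfiguration.ReadWrite.All>"  # placeholder

# Read a .mobileconfig exported from Apple Configurator / Profile Manager and
# base64-encode it; the Graph payload property expects binary content.
with open("restrictions.mobileconfig", "rb") as f:
    payload_b64 = base64.b64encode(f.read()).decode()

profile = {
    "@odata.type": "#microsoft.graph.macOSCustomConfiguration",
    "displayName": "macOS custom restrictions",
    "payloadName": "macOS custom restrictions",
    "payloadFileName": "restrictions.mobileconfig",
    "payload": payload_b64,
}

resp = requests.post(
    f"{GRAPH}/deviceManagement/deviceConfigurations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=profile,
    timeout=30,
)
resp.raise_for_status()
print("Created configuration profile:", resp.json().get("id"))
```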
Microsoft and Zscaler expand their security association with new capabilities to provision the Zscaler app from within the Intune console and configure connections using the existing Intune VPN profile workflow. Customers using the Zscaler Application on Intune-managed iOS devices to establish VPN connections can now enjoy conditional access and seamless app configuration.

Microsoft Intune extends ruggedized Android devices support with Zebra
Microsoft Intune is pleased to announce a partnership with Zebra Technologies, a leading manufacturer of ruggedized devices used by industries such as retail, healthcare, manufacturing, logistics, and more. Microsoft Intune will support management of Zebra ruggedized Android devices, starting with support for devices managed using Android device administrator mode, and adding support for Android Enterprise management later this year.

Part 3: Intune's Journey to a Highly Scalable Globally Distributed Cloud Service
Over the last couple of months I've been writing about Intune's journey to become a globally scaled cloud service running on Azure. I'm treating this as Part 3 (here's Part 1 and Part 2) of a 4-part series. Today, I'll explain how we were able to make such dramatic improvements to our SLAs, scale, performance, and engineering agility. I think the things we learned while doing this can apply to any engineering team building a cloud service.

Last time, I noted the three major things we learned during the development process:
1. Every data move that copies or moves data from one location to another must have data integrity checks to make sure that the copied data is consistent with the source data. We discovered that there are a variety of efficient/intelligent ways to achieve this without requiring an excessive amount of time or memory.
2. It is a very bad idea to try building your own database for these purposes (NoSQL or SQL, etc.), unless you are already in the database business.
3. It's far better to over-provision than over-optimize. In our case, because we set our orange-line thresholds low, we had sufficient time to react and re-architect.

After we rolled out our new architecture, we focused on evolving and optimizing our services and resources and improving agility. We came up with four groups of goals to evolve quickly and at high quality: availability/SLAs, scale, performance, and engineering agility. Here's how we did it.

#1: Availability/SLAs
The overarching goal we defined for availability/SLA (strictly speaking, SLO) was to achieve 4+ 9's for all our Intune services. Before we started the process described in this blog series, less than 25% of our services were running at 4+ 9's, and 90% were running at 3+ 9's. Clearly something needed to change. First, a carefully selected group of engineers began a systematic review of where we needed to drive SLA improvements across the 150+ services. Based on what we learned, we saw dramatic improvements over the next six months. This review uncovered a variety of hidden issues, and the fixes we rolled out made a huge difference. Here are a few of the big ones:

Retries: Our infrastructure supported a rudimentary form of retries, and it needed additional sophistication, specifically around customized request timeouts. Initially, there was no way to cancel a request if it took more than a specified time for a specific service. This meant that a request could never really be retried, because if a timeout happened, it most likely exceeded the threshold for the end-to-end operation. To address this, we added a request timeout feature that enabled services to specify custom limits on the maximum time a request can take before being canceled. This allowed services to set appropriate time limits and gave them several other retry semantics (such as backoffs) within the bounds of the overall end-to-end operation. This was a huge improvement, and it reduced our end-to-end timeouts by more than half.
Circuit breakers: It didn't take long for us to realize that retries can cause a retry storm and make timeouts much worse. We added a circuit breaker pattern to handle this (see the sketch after this list for how timeouts, retries, and a circuit breaker fit together).
Caching: We started caching responses for repeated requests that matched the same criteria, without breaking security boundaries.
Threading: During cold starts and request spikes, we noticed that the underlying framework (.NET) took time to spin off threads. To address this, we adjusted the minimum worker threads a service needs to maintain to account for these behaviors and made it configurable on a per-service basis. This almost eliminated the timeouts that happened during these spikes and/or cold starts.
Intelligent routing: This was a learning algorithm that determined the target service with the best chance of successfully serving the request. This kind of routing avoided a hung or slow node, a deadlocked process, a slow network VM, and the other random issues experienced by the services. In a distributed cloud service operating at scale, these kinds of underlying issues must be expected; they are more the norm than the exception. This ended up being a critical feature for us to design and implement, and it made a huge difference across the board, especially in reducing tail latencies.
Customized configurations: Each of our services had slightly different requirements and behaviors, so it was important to provide knobs to customize certain settings for optimal behavior. Examples of such customized settings included HTTP server queue lengths, service point count, max pending accepts, etc.
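To make the timeout, retry, and circuit-breaker interplay concrete, here is a minimal sketch in Python. Intune's services run on .NET, so this is purely illustrative rather than the production implementation, and the per-call timeout, backoff schedule, and failure threshold are arbitrary example values.

```python
import asyncio
import random
import time

class CircuitBreaker:
    """Opens after consecutive failures so retries don't become a retry storm."""

    def __init__(self, failure_threshold=5, reset_after_s=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        # Half-open: let a trial request through after the cool-down period.
        return time.monotonic() - self.opened_at >= self.reset_after_s

    def record(self, success):
        if success:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()

async def call_with_retries(op, breaker, attempts=3, per_call_timeout_s=2.0,
                            overall_budget_s=8.0):
    """Retry `op` with a per-call timeout and exponential backoff, staying
    within an overall end-to-end budget and respecting the circuit breaker."""
    deadline = time.monotonic() + overall_budget_s
    for attempt in range(attempts):
        if not breaker.allow():
            raise RuntimeError("circuit open; failing fast")
        try:
            # Cancel the individual call if it exceeds its own limit, so the
            # remaining budget can be spent on a retry instead.
            result = await asyncio.wait_for(op(), timeout=per_call_timeout_s)
            breaker.record(success=True)
            return result
        except (asyncio.TimeoutError, ConnectionError):
            breaker.record(success=False)
            backoff = 0.1 * (2 ** attempt) + random.uniform(0, 0.05)
            if time.monotonic() + backoff >= deadline:
                break
            await asyncio.sleep(backoff)
    raise TimeoutError("operation failed within the end-to-end budget")
```

The key point from the post is captured in the two limits: the per-call timeout frees budget for a retry, while the overall budget keeps retries from blowing past the end-to-end operation's threshold.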
The result of all these efforts was phenomenal. We started with less than 25% of services at 4+ 9's, and by the time we rolled out all the changes, 95% or more of our services were running at 4+ 9's! Today, Intune maintains 4+ 9's for over 95% of our services across all our clusters around the world.

#2: Scale
The re-architecture enabled us to handle our growth primarily by scaling out the cluster. It was clear, though, that the growth we were experiencing also required us to invest in scale-up improvements. The biggest workload for Intune is triggered when a device checks in to the service to receive policies, settings, apps, etc., and we chose this workload as our first target. The scale goal we set was 50k devices checking in within a short period (approximately 10 minutes) for a given cluster. For reference, at the time we set this goal our scale was 3k devices in a 10-minute window for an individual cluster; in other words, our scale had to increase by about 17x. As with the SLA work, a group of engineers pursued this effort and acted as a single unit to tackle the problem. Some of the issues they identified and improved included:

Batching: Some calls were made sequentially, and we identified a way to batch them together and send them in one request. This avoided multiple round trips and serialization/deserialization costs.
Service instance count: Some of the critical services in our cluster were running with too few instances, and these were the first bottlenecks that prevented us from scaling up. By simply increasing the instance count of those services, without changing the node or cluster sizes, we completely eliminated these bottlenecks.
Caching: Some properties of an account/tenant or user were frequently accessed by many different calls to the service(s) that held this data. We realized we could cache these properties in the token that a request carried. This eliminated many calls to other services, along with the latencies and resource consumption associated with them.
Reduce calls: We developed several ways to reduce calls from one service to another. For example, we used a Bloom filter to determine whether a change had happened, and then used that information to reduce a load of about 1 million calls to approximately 10k (a sketch of this idea follows this list).
Leverage SLA improvements: We leveraged many of the improvements called out in the SLA section above, even though both efforts were running (more or less) in parallel at the time. We also used the customized configurations to experiment, learn, and test.
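Here is a minimal sketch of the Bloom-filter idea in Python; it is illustrative only, not Intune's implementation, and the tenant names and downstream call are invented. The caller consults a compact filter of "changed" keys and only calls the downstream service when the filter says a change may have happened. False positives cause an occasional unnecessary call, but a change is never missed.

```python
import hashlib

class BloomFilter:
    """Compact, probabilistic set membership: no false negatives,
    occasional false positives."""

    def __init__(self, size_bits=1 << 20, hashes=4):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, key: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# The service that owns the data publishes a filter of recently changed
# tenants; callers consult it before making a "what changed?" request.
changed = BloomFilter()
changed.add("tenant-42")

def call_downstream(tenant_id: str):
    return [f"changes for {tenant_id}"]  # stand-in for the real remote call

def fetch_policy_changes(tenant_id: str):
    if not changed.may_contain(tenant_id):
        return []                      # definitely no change: skip the remote call
    return call_downstream(tenant_id)  # only pay for the call when needed

print(fetch_policy_changes("tenant-42"))   # filter hit: downstream call made
print(fetch_policy_changes("tenant-7"))    # filter miss: no call at all
```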
By the end of this exercise, we had successfully increased the scale from 3k device check-ins to 70k+ device check-ins, an increase of more than 23x, and we did this without scaling out the cluster!

#3: Performance
Our goal for performance had a very specific target: culture change. We wanted to ensure that our performance was continuously evaluated in production, and we wanted to catch performance regressions before releasing to production. To do this, we first used the Azure profiler and its associated flame graphs to perform continuous profiling in production. This process showed our engineers how to drive several key improvements, and it subsequently became a powerful daily tool for finding bottlenecks in code, inefficiencies, high CPU usage, etc. Some of the improvements identified by the engineering team as a result of this continuous profiling include:

Blocking calls: Some of the calls made from one service to another were incorrectly blocking instead of following async patterns. We fixed this by removing the blocking calls and making them asynchronous, which reduced timeouts and thread-pool exhaustion.
Locking: Another pattern we noticed in the profiler was lock contention between threads. Using the profiler's call stacks, we were able to examine the offending code, fix the bugs, and remove the associated latencies.
High CPU: There were numerous instances where we easily caught high-CPU situations using the profiles and quickly determined root causes and fixes.
Tail latency: While investigating certain latencies associated with device check-ins and our portal flows, we noticed that some search requests were being sent to all partitions of a service. In many cases, just one partition holds the data, and the search can be performed against that single partition instead of fanning out across all of them. We optimized the search to go directly against the partition that held the data, and the result was a drop in latency from 200 msec to less than 15 msec (a simplified sketch of this idea appears at the end of this section).

The end result was improved response times for devices checking in and faster data retrieval in our ITPro portal. Our next action was to start a benchmark service that consistently and constantly ran high traffic in our pre-production environments, with the goal of catching performance regressions. We also began running a consistent traffic load (equivalent to production loads) across all services in our pre-production environments, and we made it a practice to treat a performance drop in pre-production as a major blocker for production releases. Together, these actions became the norm in the engineering organization, and we are proud of this positive culture change in meeting the performance goal.
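A simplified sketch of the tail-latency fix, in Python with invented names (Intune's services are .NET and the post doesn't describe the real partitioning scheme): when the partition key is known, route the query to the single partition that owns the data instead of fanning out to every partition and merging the results.

```python
import asyncio
import hashlib

PARTITION_COUNT = 16

def partition_for(key: str) -> int:
    """Stable hash of the partition key (e.g. an account or device id)."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % PARTITION_COUNT

async def query_partition(partition: int, key: str) -> list:
    # Stand-in for a remote call to one partition; only the owning
    # partition actually holds rows for this key.
    await asyncio.sleep(0.01)
    if partition != partition_for(key):
        return []
    return [f"row for {key} in partition {partition}"]

async def search_fan_out(key: str) -> list:
    """Naive approach: ask every partition and merge, paying the price of the
    slowest partition on every request (the source of the tail latency)."""
    results = await asyncio.gather(
        *(query_partition(p, key) for p in range(PARTITION_COUNT)))
    return [row for rows in results for row in rows]

async def search_targeted(key: str) -> list:
    """Optimized approach: the key determines its owning partition, so only
    that partition is queried."""
    return await query_partition(partition_for(key), key)

async def main():
    print(await search_fan_out("device-123"))
    print(await search_targeted("device-123"))

if __name__ == "__main__":
    asyncio.run(main())
```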
#4: Engineering Agility
As called out in the first post in this series, Intune is composed of many independent and decoupled Service Fabric services. The development and deployment of these services, however, were genuinely monolithic in nature: they deployed as a single unit, and all services were developed in a single large repo, essentially a monolith. This setup was an intentional decision when we started our modern service journey, because a large portion of the team was focused on the re-architecture effort and our cloud engineering maturity was not yet fully realized. For these reasons we chose simplicity over agility.

As we dramatically grew our feature investments (both the number of features and the number of engineers working on them), we started experiencing agility issues. The solution was to decouple the services in the monolith from development, deployment, and maintenance perspectives. To do this we set three primary goals for improving agility:
1. Building a service should complete within minutes (down from 7+ hours).
2. Pull requests should complete in minutes (down from 1+ day).
3. Deployments to our pre-production environments should occur several times per day (down from once or twice per week).

As indicated above, our agility was initially hurting our ability to rapidly deliver features. Pull requests (PRs) would sometimes take days to complete due to the monolithic nature of the build environment; any change anywhere by anyone in Intune would impact everyone's PR. On any given day, the churn was so high that it was extremely hard to get stable, fast builds or PRs. This, in turn, impacted our ability to deploy this massive build to our internal dogfood environments; in the best case, we were able to deploy once or twice per week. That, obviously, was not something we wanted to sustain, so we invested in decoupling the monolithic services and improving our agility. Over a period of 2+ years, we made two major improvements:

Move to individual Git repos: Services moved from a proprietary Source Depot monolith branch to their own individual Git repos. This decoupled development, PRs, unit and component tests, and builds. The change brought build times down to around 30 minutes, a huge difference from the previous 7-8 hours or more.
Carve micro services out of the monolith: Services were carved out of the Service Fabric application, packaged into their own application, and turned into their own independently deployable unit. We refer to such an application as a micro service.

As this investment progressed and evolved, we started seeing huge benefits. Some examples:

Build/PR times: For micro services, a typical build now completes within 30 minutes, down from the previous 7+ hours; the monolith improved to 2-3 hours. PR times saw a similar improvement, down to a few minutes for micro services (from 1+ day).
Deployments to pre-prod dogfood environments: With the monolith, a successful deployment to a pre-production dogfood environment would take a minimum of one day and, in some extreme cases, up to a week. With the investments above, we now complete several deployments per day across the monolith and micro services, primarily because of faster deployment times (micro services deploy in parallel) and the number of services that have moved out of the monolith into their own micro services. As one example, we went from single digits to 1000's of deployments per month in production environments; in pre-production dogfood environments this was even higher, typically reaching 10's of deployments per day across all the services in a single cluster.

Challenges: Today, Intune is part monolith and part micro services. Eventually, we expect to compose 40-50 micro services from the existing monolith. There are challenges in managing micro services because of the way they are independently created and managed, and we are developing tooling to address some of these issues. For example, binary dependencies between micro services are a problem because of versioning conflicts; to address this, we developed a dependency tool to identify conflicting or missing binary dependencies between micro services (a toy version of that check is sketched below). Automation is also important when a critical fix needs to be rolled out across all micro services to mitigate a common library issue; without proper tooling, it can be very hard and time consuming to propagate the fix everywhere. Similarly, we are developing tooling to determine all the resources required by a micro service, as well as to handle resource management aspects such as key rotation, expiration, etc.
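A toy version of such a dependency check, in Python with invented service names and data shapes (the post doesn't describe the real tool): given each micro service's declared binary dependencies and pinned versions, report any package that different services pin at different versions.

```python
from collections import defaultdict

# Hypothetical inventory: each micro service with its binary dependencies and
# pinned versions. In practice this would be read from build manifests.
services = {
    "DeviceCheckInService": {"Common.Logging": "2.1.0", "Json.Serializer": "11.0.2"},
    "PolicyService":        {"Common.Logging": "2.1.0", "Json.Serializer": "12.0.1"},
    "PortalGateway":        {"Common.Logging": "1.9.4"},
}

def find_version_conflicts(services):
    """Return {package: {version: [services]}} for packages pinned at more
    than one version across micro services."""
    usage = defaultdict(lambda: defaultdict(list))
    for service, deps in services.items():
        for package, version in deps.items():
            usage[package][version].append(service)
    return {pkg: dict(by_version)
            for pkg, by_version in usage.items()
            if len(by_version) > 1}

for package, by_version in find_version_conflicts(services).items():
    print(f"Conflicting versions of {package}:")
    for version, users in sorted(by_version.items()):
        print(f"  {version}: {', '.join(users)}")
```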
Learnings
There were three learnings from this experience that apply to any large-scale cloud service:
1. It is critically important to set realistic and achievable goals for SLA and scale, and then be persistent and diligent in driving toward them. The best outcomes happen when a set of engineers from across the org works together as a unit toward a common goal. Once you have this in place, make incremental changes; the cumulative effect of all the small changes pays significant dividends over time.
2. Continuous profiling is a critical element of cloud service performance. It helps reduce resource consumption and tail latencies, and it indirectly benefits all runtime aspects of a service.
3. Micro services help improve agility. Proper tooling for patches, deployments, dependencies, and resource management is critical to deploying and operating micro services in a high-scale distributed cloud service.

Conclusion
The improvements that came from this stage of our cloud journey have been incredibly encouraging, and we are proud of operating our services with high SLAs and performance while rapidly increasing the scale of our traffic and services. The next stage of our evolution will be covered in Part 4 of this series: a look at our efforts to make the Intune service even more reliable and efficient by ensuring that the rollout of new features produces minimal-to-no impact on existing feature usage by customers, all while continuing to improve our engineering agility.

Microsoft Intune announces support for iOS 12 and macOS Mojave (10.14)
Microsoft Intune is pleased to announce support for the iOS 12 and macOS Mojave updates released earlier today by Apple. Review the Intune service support blog for other tips and recommendations that may be relevant during this change.

Introducing Remote Autopilot Reset in Intune for Education
The Intune for Education team is excited about the recently released Remote Autopilot Reset feature. This new functionality allows your school IT admin to reset devices from the Intune for Education console, hands free.