Limiting eDiscovery results to specific folders only

In this month’s article for ENow’s Solutions Engine Blog, I’m doing a quick review of the recently introduced “targeted collection” feature in Office 365, which allows you to limit the results of an eDiscovery or Content Search to specific folders only.

This has been one of the most common asks for years, especially in the Exchange world, where administrators are used to the convenience of the Search-Mailbox cmdlet for search-and-destroy operations against malware, or simply for copying messages. Unfortunately, the new keywords that make this feature possible are not supported by the Search-Mailbox cmdlet, so the only way to use it is to head to the Security and Compliance Center.
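To give you an idea of what’s involved, here is a rough sketch of how you can obtain the folder identifier that the targeted collection feature expects in the `folderid:` search keyword. It assumes an active Exchange Online remote PowerShell session, and the conversion loop is based on the snippet from Microsoft’s documentation for this feature; the mailbox address and folder path are placeholders:

```powershell
# Get the raw FolderId for a specific folder in the mailbox
$folderStats = Get-MailboxFolderStatistics -Identity user@domain.com |
    Where-Object { $_.FolderPath -eq "/Inbox" }

# Convert the Base64 FolderId to the 48-character hex format
# that Content Search understands (per Microsoft's documentation)
$encoding = [System.Text.Encoding]::GetEncoding("us-ascii")
$nibbler = $encoding.GetBytes("0123456789ABCDEF")
$folderIdBytes = [Convert]::FromBase64String($folderStats.FolderId)
$indexIdBytes = New-Object byte[] 48
$indexIdIdx = 0
$folderIdBytes | Select-Object -Skip 23 -First 24 | ForEach-Object {
    $indexIdBytes[$indexIdIdx++] = $nibbler[$_ -shr 4]
    $indexIdBytes[$indexIdIdx++] = $nibbler[$_ -band 0xF]
}
$searchFolderId = $encoding.GetString($indexIdBytes)

# The resulting value is then used in the search query, e.g.:
# folderid:$searchFolderId AND subject:"invoice"
```

The query itself still has to be entered in the Security and Compliance Center, since, as noted above, Search-Mailbox does not understand the new keyword.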

To learn more about the feature, consult the documentation or head to the full article here: http://blog.enowsoftware.com/solutions-engine/performing-ediscovery-against-a-specific-folder

Posted in Office 365 | Leave a comment

Clearing AIP client and PowerShell module token cache

The question of how to “log out” of the Azure Information Protection client or the corresponding Office add-in is one that seems to pop up often. The AIP team has actually published information on how to achieve this in the following article. In a nutshell, in order to reset authentication you have to delete the TokenCache value under HKEY_CURRENT_USER\SOFTWARE\Microsoft\MSIP or delete the TokenCache file under %localappdata%\Microsoft\MSIP.
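If you prefer to script it rather than do it by hand, the two deletions above can be performed from PowerShell. A minimal sketch, to be run as the affected user (both locations are per-user):

```powershell
# Delete the TokenCache registry value, if present
Remove-ItemProperty -Path "HKCU:\SOFTWARE\Microsoft\MSIP" `
    -Name "TokenCache" -ErrorAction SilentlyContinue

# Delete the TokenCache file, if present
Remove-Item -Path "$env:LOCALAPPDATA\Microsoft\MSIP\TokenCache" `
    -ErrorAction SilentlyContinue
```

The next time the client needs a token, it will prompt for credentials again.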

In addition, the team has also started gathering feedback on the importance of support for multiple accounts, much like we’ve had for a while now with “pure” RMS in Office. Make sure to vote for the corresponding item on UserVoice and also leave your feedback there!

Now, the above doesn’t cover the AzureInformationProtection PowerShell module, which is another very useful tool. While the module supports a non-interactive mode that uses service principal credentials, it can also be used interactively. Once you provide credentials, however, there is no way to actually log out or change the logged-in user, and the cached token will persist even across restarts, until it expires.

So, in case you want to log out of the module or change the logged-in user, you again have to resort to manual actions. The steps are similar to the ones above for the AIP client, however both the registry key and the file are in different locations. Anyway, without further ado, to remove the token and force the module to ask for credentials:

  • Start regedit
  • Navigate to the following key: HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\MSIPC\pscmdlet
  • Locate the subkey corresponding to the currently used tenant (either compare the GUID or simply expand the subkeys to check the corresponding user Identity)
  • Once you’ve located the relevant key, delete it
  • Also delete the file storing the token from %LocalAppData%\Microsoft\MSIPC\pscmdlet\Auth (should not be necessary, but just in case)
  • Run any AIP related cmdlet, such as Get-RMSTemplate, and provide the new set of credentials.
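The steps above can also be sketched in PowerShell. Note that the exact layout under the tenant subkeys (where the user Identity value lives) may vary, so inspect the output before deleting anything, and keep in mind the `<tenant-guid>` below is a placeholder you need to fill in yourself:

```powershell
$base = "HKCU:\Software\Classes\Local Settings\Software\Microsoft\MSIPC\pscmdlet"

# List the tenant subkeys so you can identify the right one
Get-ChildItem -Path $base | Select-Object PSChildName

# Delete the subkey for the relevant tenant (replace the GUID
# with the one identified above)
Remove-Item -Path "$base\<tenant-guid>" -Recurse

# Also clear the cached token file(s), just in case
Remove-Item -Path "$env:LOCALAPPDATA\Microsoft\MSIPC\pscmdlet\Auth\*" `
    -ErrorAction SilentlyContinue

# Any AIP cmdlet will now prompt for credentials, for example:
Get-RMSTemplate
```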

The above steps are not really supported by Microsoft, so use at your own risk!

 

Posted in Office 365, PowerShell | Leave a comment

New version of the AzureAD PowerShell (Preview) module released, brings support for Groups lifecycle policies

In case you missed it, a new version of the AzureADPreview PowerShell module has been released yesterday, namely version 2.0.0.137. This new version brings support for controlling Office 365 Groups lifecycle policies, by means of the following cmdlets:

Add-AzureADMSLifecyclePolicyGroup
Get-AzureADMSGroupLifecyclePolicy
Get-AzureADMSLifecyclePolicyGroup
New-AzureADMSGroupLifecyclePolicy
Remove-AzureADMSGroupLifecyclePolicy
Remove-AzureADMSLifecyclePolicyGroup
Reset-AzureADMSLifeCycleGroup
Set-AzureADMSGroupLifecyclePolicy

The cmdlets’ help is not yet available online, but you can use Get-Help to check the syntax and examples. Overall, the cmdlets are easy to use, though unfortunately they too suffer from the now all-too-familiar ObjectId dependence.
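Until the online documentation appears, the built-in help is your best reference. For example:

```powershell
# Full syntax and parameter descriptions
Get-Help New-AzureADMSGroupLifecyclePolicy -Full

# Just the usage examples
Get-Help Add-AzureADMSLifecyclePolicyGroup -Examples
```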

In order to create a new policy, use the New-AzureADMSGroupLifecyclePolicy cmdlet. Only a single policy is supported per tenant, and when creating it you need to decide whether it applies to All Groups or just Selected ones. You also need to specify the lifetime duration for a Group, for example 10 years, as well as the contact that will receive the notifications in addition to any Group owners. Here’s an example:

New-AzureADMSGroupLifecyclePolicy -GroupLifetimeInDays 3650 -ManagedGroupTypes Selected -AlternateNotificationEmails: user@domain.com

Id                                   GroupLifetimeInDays ManagedGroupTypes AlternateNotificationEmails
--                                   ------------------- ----------------- ---------------------------
97763682-e547-4c4a-8d03-25d9d5f777a6                3650 Selected          user@domain.com

Once the policy is created, you can assign it to specific Groups via the Add-AzureADMSLifecyclePolicyGroup cmdlet:

Add-AzureADMSLifecyclePolicyGroup -GroupId (Get-AzureADMSGroup -SearchString default).Id -Id (Get-AzureADMSGroupLifecyclePolicy).ID
True

To check what policy, if any, is assigned to a Group, use the Get-AzureADMSLifecyclePolicyGroup cmdlet:

Get-AzureADMSLifecyclePolicyGroup -Id (Get-AzureADMSGroup -SearchString default).Id | fl

Id                          : 97763682-e547-4c4a-8d03-25d9d5f777a6
GroupLifetimeInDays         : 3650
ManagedGroupTypes           : Selected
AlternateNotificationEmails : user@domain.com

If you want to learn more about the Groups lifecycle policy, including how to set it up via the Azure portal, check out Tony’s article here: https://www.petri.com/group-expiration-policy-preview

 

Posted in Azure AD, Office 365, PowerShell | Leave a comment

The issue that shouldn’t have been #1 – Office digital signatures

Introducing a new series

There is no doubt that we live in interesting times, as the world of IT changes at a faster pace than ever. Cloud, DevOps, continuous integration/deployment – it’s all so exciting. When it works. There is no denying that it does work most of the time, at least in Microsoft land. Unfortunately, it’s not that uncommon for a code change that brings undesired results to hit production environments, and when that happens in a service of the scale of Office 365, things can get really ugly. One such example was the recent issue that exposed customers’ data in the Office 365 Admin Center Reports: https://www.petri.com/data-breach-office-365-admin-center

Sadly, that’s just one of many examples, and certainly not a precedent. In fact, so many “easy to spot” issues have made it to different parts of the service, the Office suite, and the desktop, mobile and server OSes, that at times it makes you wonder whether Microsoft engineers perform any actual QA testing anymore. We might use “testing in production” as a joke, but when something like the aforementioned incident happens, it can have very serious repercussions. But that’s not the point of this article (soon to be series)!

The point is, it is important that we as customers (colleagues, experts, MVPs, insert_noun_here) keep Microsoft (and other companies) in check, and keep them honest. As a big proponent of “openness” in communication, at times I do not feel that some of these issues are handled properly by Microsoft, thus I plan to do my part and highlight any newly occurring incidents that are clearly mistakes that should not have made it to production and that could’ve been avoided by proper testing. An “incident log” if you will, or a record of shameful events 🙂

Example #1 – Office applications signed with incorrect certificate

So, for the first article in the series, let’s talk about the shameful incident that pushed incorrectly signed executable files to Office users with the July 27, 2017 updates. This out-of-band release incorporated some very important fixes for security issues with Outlook and, ironically, ended up causing security-related issues of its own. Namely, the executable files delivered as part of this update, such as Outlook.exe or WinWord.exe, were all signed with Microsoft’s internal TEST certificate, which chained up to the untrusted “Microsoft Testing Root Certificate Authority 2010” root, as shown in the images below:

The issue was immediately caught and reported in the various communities, as it prevented Office applications from running in AppLocker-protected environments, caused AV software (though not Windows Defender, of course) and other software that verifies code signatures to report those applications as untrusted, and so on. The next day, an update was released that addressed this issue:
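If you want to check whether the executables on a given machine are affected, one quick option is PowerShell’s Get-AuthenticodeSignature cmdlet. The path below is for a typical Click-to-Run install and may differ on your machine:

```powershell
# Inspect the signature on the Outlook executable
$exe = "C:\Program Files\Microsoft Office\root\Office16\OUTLOOK.EXE"
$sig = Get-AuthenticodeSignature -FilePath $exe

# On a correctly signed build, Status is "Valid" and the issuer
# chain does not mention a "Testing" root certificate authority
$sig.Status
$sig.SignerCertificate.Issuer
```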

Leaving aside any speculation on how such an obvious mistake can make it through all the validation rings (which supposedly exist?), what’s even more mind-boggling in this situation, to me at least, is that Microsoft did NOT release an update for the other channels. Thus, for most enterprise environments, very few of which will be running on the Current channel, the issue is still present, and working around it requires either disabling features such as AppLocker or adding this Microsoft-internal root certificate to your Trusted Roots store. Far from ideal, if you ask me.

Now, one can argue that the Deferred channels should only receive security updates, and in this case the update is clearly marked as “non-security”, as seen in the above screenshot. The question here is: should an obvious mistake on Microsoft’s side, one with serious implications for productivity (we can also argue about some security implications), be allowed to persist for weeks, or even months? I’ll leave the answer to you…

Instead of conclusion

So, there we have it: the first example in what will most likely turn into a series of blog posts covering unfortunate incidents that could’ve been avoided in an ideal world. It’s understandable that Microsoft representatives don’t usually want to talk about such issues, as they can result in some bad exposure. I’d also agree that we are all humans, we all make mistakes, and so pointing fingers doesn’t do much good. It is my firm belief, however, that the majority of Microsoft’s customers can be understanding and forgiving; after all, how many of us can even imagine the complexity of running things at such scale? So I’d urge a more open approach to handling such issues. Plus, being open about things is generally preferable to leaving the impression that you are trying to sweep them under the rug.

It is also my belief that such issues should be properly acknowledged and acted upon, so that we as the customers are assured that a lesson has been learned and improvements are planned (even in cases where no word has reached the outside world, which I’m sure also happens). Thus, I reserve my right to annoy people at Microsoft next time I run into an issue that we’ve already reported and that was supposedly acted upon. And that’s the whole idea behind this article (series) – keep Microsoft honest, make sure they follow their own procedures and best practices, for both our and their benefit!

Posted in Uncategorized | 1 Comment

Office 365 Permissions Inventory scripts vol3

Continuing the Permissions inventory series, I’ve published two more scripts on the TechNet Gallery. Both deal with mailbox folder permissions, with the first covering only the default Calendar folder.

Both scripts share much of the same code; after all, getting the permissions is done via the Get-MailboxFolderPermission cmdlet. For the Mailbox folder permissions inventory script, however, the number of folders covered, and thus the number of permission entries gathered, is much larger, so some additional optimizations have been made in the code. These include arrays specifying which folders to cover and which to exclude, as well as a parameter to exclude permission entries from specific users. I’d recommend adjusting those accordingly, especially if you run the scripts in large organizations.
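To illustrate the general approach, here is a simplified sketch of the core loop; the variable names and folder/user lists are illustrative, not the actual script’s, and the real scripts add error handling, progress tracking and the output options described below:

```powershell
# Folders to cover and permission entries to skip (adjust as needed)
$includeFolders = @("Inbox", "Calendar", "Sent Items")
$excludeUsers   = @("Default", "Anonymous")

foreach ($mbx in (Get-Mailbox -ResultSize Unlimited)) {
    foreach ($folder in $includeFolders) {
        # Gather permissions for each folder, skipping excluded entries
        Get-MailboxFolderPermission -Identity "$($mbx.PrimarySmtpAddress):\$folder" `
            -ErrorAction SilentlyContinue |
            Where-Object { $excludeUsers -notcontains $_.User.DisplayName } |
            Select-Object @{n="Mailbox";e={$mbx.PrimarySmtpAddress}},
                FolderName, User, AccessRights
    }
}
```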

Unlike the previous scripts in the series, output is written to a CSV file by default – it seems I’m the only one who prefers to actually take a look at the output before dumping it to a file 🙂 Feedback taken! The scripts will still keep the output in the global $varPermissions variable, in case you want to manipulate it before export. And in case you are having trouble with the session breaking too often, an alternative is provided to save the output after each script iteration.

And since we mentioned output – two different variants are included. The default one writes each permission entry to a new line in the CSV file, but you can also use the -CondensedOutput switch to specify that you want to get the “shorter” version, with one line per mailbox folder (or one line per mailbox in the case of the Calendar Permissions script). Examples below, let me know which one you prefer.

Anyway, here are the links to the scripts and more detailed descriptions on the Cogmotive blog site:

Posted in Exchange Online, Office 365, PowerShell | Leave a comment