The LandPhil: be honest, be honorable, be kind, be compassionate, and work hard.

September 3, 2015

Firefox, Root Certificates, and you

Filed under: PowerShell — phil @ 4:41 pm

If you’ve ever dealt with a certificate authority (CA), you may know most of this.  If this is your first foray into it, hold onto your butts.

Many enterprises stand up and run their own certificate authorities to 1) maintain control over certificate issuance, 2) maintain the security of the certificate chain, and 3) avoid paying a public certificate authority to issue certs.  Until recently, you could get certificates for private addresses (10.x.x.x or 192.168.x.x) or private names (my.domain.local) from public certificate authorities.  Now, they won’t issue a certificate unless they can actually verify that the name/address space belongs to you.  So, for many enterprises, the third reason above isn’t viable for namespaces behind their firewalls.

The easiest way to set up a CA is to use Microsoft’s Certificate Authority role that is part of Windows Server.  A handy thing to remember here is that the CA infrastructure is NOT tied to the domain infrastructure.  So, just because it says it’s domain integrated doesn’t mean you can’t issue certificates for spaces OUTSIDE of the named domain.  (Example: I installed it in my.domain.local, but I want to issue a certificate for notmine.newdomain.local.  I can, and it won’t be a problem.)  The domain integration is just an easy way to publish and distribute the important parts, namely the root certificate and the certificate revocation lists (CRLs).
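
For what it’s worth, on Windows Server 2012 or later the whole role can be stood up from PowerShell.  A minimal sketch (the CA name is a placeholder, and EnterpriseRootCA gives you the domain-integrated variety):

## Add the AD CS role, then configure an Enterprise (domain-integrated) root CA
Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
Install-AdcsCertificationAuthority -CAType EnterpriseRootCA -CACommonName "Local CA" -Force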

Once you have your CA set up and domain integrated and your root certificate is published, you’d be all set… if the only browsers in your organization are Internet Explorer or Chrome.  See, a web browser uses a certificate store to keep all of the root certificates from all of the relevant public authorities, along with any other certificates you want.  IE and Chrome use the OS-integrated certificate store, which is conveniently updated by Microsoft Update and Active Directory.  In the case of Firefox (and pretty much every other browser), the developers have decided NOT to trust the operating system store and maintain their own.  This, of course, means that you can’t rely on MS Update or Active Directory to update it with new root certificates.
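
For contrast, here’s roughly what Group Policy is doing for IE and Chrome behind the scenes; a hedged sketch using the PKI module (Windows 8/Server 2012 or later), with a placeholder path:

## Add a root certificate to the OS certificate store that IE and Chrome consult
Import-Certificate -FilePath "\\servername.domain.local\share\path\certificate.crt" -CertStoreLocation Cert:\LocalMachine\Root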

Do some searching online and you’ll find a few references on how to overcome this problem with VBScript and the certutil.exe utility that works with Firefox’s certificate database (this is the NSS certutil, a different animal from the Windows certutil.exe).  I suggest you go check those out.  You will actually need to jump through the hoops to obtain that certutil.exe in order to use the script I’ve included below.  The VBScript is fairly basic and has a few caveats and bugs that aren’t readily apparent, so I re-wrote the entire thing in PowerShell for the new generation.
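
For reference, the core operation the script performs for each Firefox profile boils down to one certutil call (again, the NSS certutil.exe, not the Windows one; the tool path and profile name below are placeholders):

## Add the root cert to one profile's certificate database, then list the database to verify
$ffProfile = "$env:APPDATA\Mozilla\Firefox\Profiles\xxxxxxxx.default"
& "C:\Tools\certutil.exe" -A -n "Local CA" -i "C:\Tools\certificate.crt" -t "CT,c,C" -d $ffProfile
& "C:\Tools\certutil.exe" -L -d $ffProfile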

I make no guarantees on its execution, and as with ALL code you download from the internet, it’s best to analyze and test it on your own prior to deployment.

## Import Certificates to Firefox
## Name: import_cert2firefox_public.ps1
## Author: Phillip Cheetham
## Date: 08/26/2015

## For GPO, add script to User Configuration \ Policies \ Windows Settings \ Scripts \ Logon
## Can configure as Powershell script, link to script path; will only work on Windows 7/2008 R2 or later
## Can configure as regular script:
##    Script Name: powershell.exe
##    Script Parameters: -noninteractive -command <script-path\script.ps1>

## Set variables from computer environment
$strTempDir = $env:TEMP
$strAppDataDir = $env:APPDATA
$strFirefoxProfilesDir = $strAppDataDir + "\Mozilla\Firefox\Profiles"

## Set Domain specific variables for installation
## Replace \\servername.domain.local\share\path with the network path to folder containing certutil.exe and certificate files
$strCertutilFolder = "\\servername.domain.local\share\path"
## Replace Local CA with name of local certificate authority.
$strLocalCertificateAuthorityName = "Local CA"
## Replace certificate.crt with the name of the certificate file
## If you need to deploy multiple certificate files, see the commented block near the end of the profile loop below
$strCertificateFileName = "certificate.crt"
## Set appropriate trust for the certificate authority by editing "CT,c,C"
## Refer to https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Tools/certutil for more information
$strTrustAttributes = "CT,c,C"

## Do not edit below this line
function IsInstalled
{
    param(
        [Parameter(Mandatory=$true)]
        [string]$ProgramName
        )

    ## Check the native uninstall key, plus the Wow6432Node key on 64-bit systems,
    ## so both 32-bit and 64-bit installs are detected
    $uninstallPaths = @("HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*")
    if (($env:PROCESSOR_ARCHITECTURE -eq "AMD64") -or ($env:PROCESSOR_ARCHITECTURE -eq "IA64")) {
        $uninstallPaths += "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*"
    }

    foreach ($path in $uninstallPaths) {
        if (Get-ItemProperty $path -ErrorAction SilentlyContinue | Where-Object { $_.DisplayName -match $ProgramName }) {
            return $true
        }
    }
    return $false
}

if (IsInstalled "Firefox") {
    ## Recreate the staging directory so the published files/certs are current
    Remove-Item -Path ($strTempDir + "\FirefoxTools") -Force -Recurse -ErrorAction SilentlyContinue
    if (New-Item -ItemType Directory ($strTempDir + "\FirefoxTools") -Force) {
        Copy-Item ($strCertutilFolder + "\*") ($strTempDir + "\FirefoxTools") -Force #force file overwrite
    }
    else {
        exit #terminate script execution if directory creation fails
    }
    #insert certificates
    $arrFirefoxProfileList = Get-ChildItem $strFirefoxProfilesDir -ErrorAction SilentlyContinue | Where-Object { $_.PSIsContainer }
    foreach ($ffProfile in $arrFirefoxProfileList) {
        ## Backup the profile's certificate database first
        Copy-Item ($ffProfile.FullName + "\cert8.db") ($ffProfile.FullName + "\cert8.db.old") -Force -ErrorAction SilentlyContinue

        ## Execute certificate insertion
        ## Build the command line one piece at a time, because Invoke-Expression will not process it as a single line
        ## and the '&' call operator will not process it as a single variable
        $execCmd = $strTempDir + "\FirefoxTools\certutil.exe"
        $execCAName = "`'$strLocalCertificateAuthorityName`'" #needs to be encapsulated in single quotes
        $execRootFile = $strTempDir + "\FirefoxTools\" + $strCertificateFileName
        $execAttribs = "`"$strTrustAttributes`"" #needs to be encapsulated in double quotes - VERY IMPORTANT
        $execProfile = $ffProfile.FullName
        & $execCmd -A -n $execCAName -i $execRootFile -t $execAttribs -d $execProfile

        ## To include multiple certificate files, uncomment and copy the lines below as necessary
        ## $strLocalCertificateAuthorityName = "Local CA" #<- Update this line
        ## $execCAName = "`'$strLocalCertificateAuthorityName`'"
        ## $strCertificateFileName = "certificate.crt" #<- Update this line
        ## $execRootFile = $strTempDir + "\FirefoxTools\" + $strCertificateFileName
        ## & $execCmd -A -n $execCAName -i $execRootFile -t $execAttribs -d $execProfile
    }
    Remove-Item -Path ($strTempDir + "\FirefoxTools") -Force -Recurse -ErrorAction SilentlyContinue #remove temp directory
}


June 8, 2012

Coding!

Filed under: Uncategorized — phil @ 4:01 pm

So I have to toot my own horn for a bit.

First off, I haven’t really written a program since I was in college.  And that really isn’t a decent real world coding experience anyway.  In school, you’re working in a fairly closed environment in terms of what’s expected and what’s provided.  “Here’s what a data structure is…now go write one.”

I’ve played around with Java and did a little bit of coding making IRC bots, but nothing terribly exciting.  And, for the most part, I really haven’t had to solve any problems with code.  I’ve been working as a systems administrator.  Sure, there’s some scripting, but I wouldn’t consider it the same thing.

Two things changed that for me:  PowerShell and my new job as a SharePoint farm administrator.  PowerShell came out during my last job, and I played with it a bit, but I really didn’t get the awesomeness of it until I started working in SharePoint.  PowerShell is a scripting language, but it’s object oriented and tightly integrated with everything in the Microsoft stack.  Writing scripts in PowerShell is nipping at the edges of .NET programming.  You’re working with the same object models, but in an environment where you can load and unload objects in the structure at your whim.

Because of this, I came to greatly understand the SharePoint object model.  I scripted large swaths of what I was doing in SharePoint.  Farm configuration, web application configuration, site collection creation and permissions.  It even got to the point where I was tinkering around with the portal navigation providers in PowerShell.  (Adding links to the top site navigation and the quick launch, complete with security trimming!  Something that you can’t do in the graphical interface.)
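
To give you a flavor of it, here’s a rough sketch of that navigation tinkering from the SharePoint 2010 Management Shell (the URLs and titles are made up for the example):

## Add an external link to the top navigation bar; $true marks the URL as external
$web = Get-SPWeb "http://sharepoint/sites/portal"
$node = New-Object Microsoft.SharePoint.Navigation.SPNavigationNode("Team Wiki", "http://wiki.domain.local", $true)
$web.Navigation.TopNavigationBar.AddAsLast($node) | Out-Null
## The quick launch works the same way through $web.Navigation.QuickLaunch
$web.Dispose()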

Well, it’s finally come full circle.  Over the last two days I have written, compiled and deployed my first custom event receiver in SharePoint.

First off, why?  Well, there’s a lot of functionality provided by SharePoint.  But Microsoft is in this for the money, so they’ve restricted access to certain functionality based on the “tier” of the application you’re using.  The free version, SharePoint Foundation, does a lot, but at the same time, it’s fairly restrictive.  SharePoint Server Standard adds a lot of fun stuff, but there’s a hefty price to be paid to get there.  Enterprise adds on even more, but aimed at higher-end business functionality (think business intelligence).  So, as you might expect, you have to live with some caveats if you’re using SharePoint Foundation…

Unless you decide you can just do it yourself.  Enter my new found (or old remembered) ability to write code.

The past 48 hours have been entertaining.  My solution went from a farm solution to a sandboxed solution (I didn’t like the fact that the features it enabled could be activated in any site in the entire farm; it just made it messy).  I also had to solve the interesting problem of how to catch and handle errors: my farm solution just wrote to the ULS logs on the web front ends, but sandboxed solutions are restricted from doing that, so I had to come up with a custom solution for that too.  (It’s rather ingenious, I might add.  Hunting through logs is messy, so I create and write to a custom list.)
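
The event receiver itself is C# inside the solution, but the “log to a custom list” idea looks roughly like this in PowerShell terms (the list name and URL are placeholders):

## Append an entry to a custom log list instead of hunting through log files
$web = Get-SPWeb "http://sharepoint/sites/mysite"
$list = $web.Lists["Error Log"]
$item = $list.Items.Add()
$item["Title"] = "Something went wrong in the event receiver"
$item.Update()
$web.Dispose()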

I’m actually pretty jazzed.  Too bad I’ve got enough “administrative work” to sink a ship.  I’d probably keep coding.


December 30, 2011

Leave SharePoint Alone!

Filed under: Uncategorized — phil @ 1:06 pm

This is only funny if you’ve ever worked with SharePoint.  Or really any enterprise level Microsoft product.  Or really any enterprise software product…

No, only funny to SharePoint folks: Leave SharePoint Alone

Props to Christian Buckley.

December 29, 2011

SharePoint: Upgrading is a bear

Filed under: Uncategorized — phil @ 6:57 pm

So you’ve patched SharePoint.  And then you’ve run the configuration wizard on all of the farm machines to actually apply the updates.  Or, perhaps you prefer to use the command line psconfig.exe.  In any case, you get the dreaded (and very generic and not at all helpful) “an update conflict has occurred, and you must re-try this action”.

Fair enough, it says you should re-try, so you do.  And you get it again.  Ad infinitum.

Turns out it’s a problem with the configuration cache on that machine.  You need to clear that out.  I found the steps here (http://support.microsoft.com/kb/939308, which is an article for SP 2007, but it worked for me on 2010.)

I’ll summarize them:

  1. Stop the timer service
  2. Go to <systemrootdrive>:\ProgramData\Microsoft\SharePoint\Config\<GUID>
  3. Open the cache.ini file and look at the number.
  4. Backup the folder.
  5. Delete all of the XML documents in the folder.
  6. Edit cache.ini so that it only contains “1”.
  7. Restart the timer service.

When you restart the timer service, it should rebuild the contents of that directory, along with changing the value in the cache.ini file.  You should also then be able to run the upgrade wizard (or psconfig.exe).
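
If you find yourself doing this more than once, the steps script easily.  A hedged sketch for SharePoint 2010 (the timer service name is SPTimerV4; the Config path may vary by build), which backs up the folder per step 4:

## Stop the timer, back up the cache folder, clear the XML files, reset cache.ini, restart
Stop-Service SPTimerV4
$configRoot = Join-Path $env:ProgramData "Microsoft\SharePoint\Config"
$cacheDir = Get-ChildItem $configRoot | Where-Object { $_.PSIsContainer -and (Test-Path (Join-Path $_.FullName "cache.ini")) } | Select-Object -First 1
Copy-Item $cacheDir.FullName ($cacheDir.FullName + ".bak") -Recurse
Get-ChildItem $cacheDir.FullName -Filter *.xml | Remove-Item
Set-Content (Join-Path $cacheDir.FullName "cache.ini") "1"
Start-Service SPTimerV4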


September 22, 2011

SharePoint: Search gotcha

Filed under: SharePoint — phil @ 9:36 am

So below I detailed the steps I had to take to get SharePoint Foundation Search running on my farm.  It worked great.  It even indexed content I didn’t expect it to index!

One of the sites that I am indexing now contains a list.  That list holds seemingly innocuous information, but one of the group’s requirements was that some of the information on it not be viewable by ‘public’ (visitor) users.  No problem.  I created a new Display Form for the list, set it as the default, and I thought I was done.  Until the crawler got to the site.  See, I didn’t remove the original display form because I thought: 1) I might need it, and 2) I’m not a big fan of removing default content unless I really have to.

You can guess what happened.  I entered data from the form into the search box and got back links to the original (no longer default) display form aspx page.  That was no good.  I was hoping that I could somehow restrict access to, or hide, the page, but after poking around in SharePoint Designer for a little bit, I decided I’d just delete the file.  The interesting part is that this appeared to break the index for the site, and I had to wait for the crawler to run again before searching worked.

Worst case, I mount and recover the file from an SPSite backup that I have.  Though, I can probably also just copy the default display form aspx page from another list.

September 20, 2011

SharePoint: Configuring Foundation Search

Filed under: SharePoint — phil @ 12:08 pm

You’ve got SharePoint installed.  You’re generating content.  Now you’d like it if that little Search bar on the top right actually returned something other than:

  • Your search cannot be completed because this site is not assigned to an indexer.  Contact your administrator for more information.

The first step is to turn on the Foundation Search Service.  You do that with the following:
  1. Open Central Administration
  2. Go to System Settings -> Manage services on server
  3. Click on SharePoint Foundation Search
  4. Assign a service account to start the service (this is a managed account, so select one or add a new one)
  5. Assign a content access account (more on that below)
  6. Enter a database server and database name (or accept the defaults)
  7. Choose an indexing schedule
  8. Click Ok
  9. When you return to the Manage services on server page, click Start next to SharePoint Foundation Search

That’s only the first part, however.  You’ve got the Search service running and the indexer on a schedule, but you actually have not yet identified what to index.  In the Standard and Enterprise versions of search, you get a full-blown service application and a lot of that configuration takes place there.  However, that’s not the case with Foundation search, so let’s continue.

The next thing that you need to do is grant the content access account (that you entered above) read access to your applications.  Do this (a PowerShell equivalent is sketched after the list):
  1. Go to Central Administration
  2. Go to Application Management -> Manage web applications
  3. Highlight the web application that you want foundation search to index
  4. Click User Policy in the ribbon
  5. In the Policy for Web Application box that appears, click Add Users
  6. Leave the Zone selection at (All zones), click Next
  7. In the Choose Users box, enter the username of the Content Access Account you used above when you configured SharePoint Foundation Search
  8. Check Full Read for permissions
  9. Click Finish
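
As promised, a rough PowerShell equivalent of the user policy steps above (the account name and URL are placeholders):

## Grant the content access account Full Read on the web application, across all zones
$wa = Get-SPWebApplication "http://address1"
$policy = $wa.Policies.Add("DOMAIN\svc-crawl", "Search Crawl Account")
$policy.PolicyRoleBindings.Add($wa.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead))
$wa.Update()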

Alright.  Now you have the indexer running and the content access account has privileges to read all of the content in your web application.  But you still haven’t identified the content to crawl.  That’s done at the content database level.  So:
  1. Go to Central Administration
  2. Go to Application Management -> Manage Content Database Settings
  3. Click on the content database that contains the site(s)/site collection(s) that you want to index
  4. In the settings for that content database, in the section Search Server, use the drop down list to select the server with SharePoint Foundation Search service running
  5. Click OK.

Voilà!  You have turned on and set a schedule for indexing, granted permissions for the crawl account, and identified content to crawl.  More than likely, you’re done.  Depending on the schedule you created in the first part and the amount of data to crawl, you may need to wait a little while before checking that everything is working.

In my case, it did not.  The search box continually returned “no results” for anything that I typed in.  So I turned to the Application Event Log and found this:

Log Name: Application
Source: SharePoint Foundation Search
Logged: <datetime>
Event ID: 14
Task Category: Gatherer
Level: Warning
User: <search service account>
Computer: <sharepoint server>
OpCode: Info
General:
The start address sts4s://<website address>/contentdbid={<guid of content db>} cannot be crawled.
Context: Application ‘Search_index_file_on_the_search_server’, Catalog ‘Search’
Details:
Access is denied.  Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository.  If the repository being crawled is a SharePoint repository, verify that the account you are using has “Full Read” permissions on the SharePoint Web Application being crawled. (0x80041205)

This was rather disheartening, as the error message indicated that the fix was to do all of the things I had already done.  It turns out the answer was hidden in the start address.  You see, I had created the web application with <address1>, and that was the Default address.  I had then created a multitude of Custom addresses and was using one of those as the main address for the website, <address2>.  However, no matter which address you use to view the web page, the crawler always crawls the Default address <address1>; so when you search the site while viewing it at <address2>, the engine cannot match them and returns “no results found”.  I resolved the problem by swapping the Default and Custom zone access mappings in: Central Administration -> Application Management -> Configure alternate access mappings.
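
If you want to check your zones before clicking around, the Management Shell can list them (the URL is a placeholder); the crawler follows whatever sits in the Default zone:

## List every alternate access mapping for the web application, with its zone
Get-SPAlternateURL -WebApplication "http://address1" | Format-Table IncomingUrl, Zone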

However, I also had Audit Failures for logons with the Content Access Account.  After much gnashing of teeth and dead-end searching, I stumbled upon this article: http://support.microsoft.com/kb/896861, which details a registry change that allows loopback authentication for a specific list of host names.  The details are at the link, but I will also include what I did below:
  1. Click Start, Click Run, type regedit, and then click Ok.
  2. In the Registry Editor, locate and then click on the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
  3. Right-click MSV1_0, point to New, and then click Multi-String Value.
  4. Type BackConnectionHostNames, and then press Enter.
  5. Right-click BackConnectionHostNames, and then click Modify.
  6. In the Value data box, type the host name or the host names for the sites that are on the local computer, and then click Ok.
  7. Quit Registry Editor, and then restart the IISAdmin Service

I included all of the alternate access mapping addresses in the list above.
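
The same registry change can be made from an elevated PowerShell prompt (the hostnames below are placeholders):

## Create the BackConnectionHostNames multi-string value, then bounce IISAdmin per step 7
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" -Name BackConnectionHostNames -PropertyType MultiString -Value @("portal.domain.local", "intranet.domain.local") | Out-Null
Restart-Service IISADMIN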

My foundation search works fine now.  I am able to query and return information for sites and sub-sites for all of the content databases that I’ve identified for indexing.

August 30, 2011

SharePoint: Using custom CSS

Filed under: General,SharePoint — phil @ 3:03 pm

For the love of all that is holy, don’t try and do this in SharePoint designer.  I’m still not entirely sure what was going on, but it’s not pretty.

Let’s say you have a page that you want to put some custom CSS on.  You could open the page in Designer and modify the corev4.css.  What this actually does is copy the file from the web application root to a _styles directory in your current site collection, and then re-link everything to it in the background.  This is handy, except that the one thing that doesn’t get re-linked is Themes.  Any theme you select still only makes changes to the root corev4.css file, which (if you’re following along) is overridden by the copy you have in your site collection.  Short story: your new site collection isn’t themable.

The solution: only update the styles you specifically want to change, and do it directly on the page you’re editing…in the web browser.  In the ribbon under Editing Tools: Format Text, there’s a drop-down for HTML Markup.  Choose Edit HTML Source, drop your custom CSS in between <style></style> tags, and close the window.

The net upshot is that this doesn’t break the site definition template that your page is based on (which IS what happens when you try to do the above in SharePoint Designer).  Another bonus is that the style is truly page-dependent.  If you reopen the HTML source, you’ll see that SharePoint has renamed your styles with an .ExternalClassHASH.

Slick, easy, and it doesn’t break anything.  Too bad it took me like 4 hours to figure out.  Hopefully I’ll save you the headache.

Addendum: If you’re editing a web part page, you may notice that you don’t have an HTML Markup button on your ribbon.  You can get around this issue by dropping a Content Editor web part on your page.  Once it’s there and you go to enter content, you will see that you now have an HTML Markup button.  You can drop the style into the content editor web part (in HTML view) and it will work the same way…just for that page.

August 9, 2011

SharePoint Designer: Getting the DateTime into a readable format

Filed under: SharePoint — phil @ 10:15 am

There are 3 view forms in SharePoint Designer for lists:

  • New
  • Edit
  • View

The default versions of these forms are created by SharePoint when the list is created.  They use a ListFormWebPart to display their information.  This is done by querying SharePoint for the objects directly.

When you create a custom form of any of the types specified above (New, Edit, or View), the default web part that is included is actually the DataFormWebPart.  This queries SharePoint for data, feeds it into an XML stream, and the web part uses XSLT to parse data out of the string.  Unfortunately, this means that all parts of the SharePoint objects are serialized, and thus you may run into problems rendering the data in string form.

Case in point: the DateTime object.  The DateTime object has fields for the Date and the Time separately.  When serialized, however, the data is concatenated into a single string.  This has two downsides: 1) a single string with no white space is hard to read, and 2) since the time is stored in Zulu time (so that a particular user can display it based on his chosen time zone), the string data is also displayed in Zulu time.

There is, fortunately, a way around this.  Edit the DataFormWebPart and do the following:
1.  Right above the table that contains the data rows, below the <xsl:template> tag, insert:
  • <xsl:param name="Pos" select="position()" />
2.  In the table row that contains the dates and times you wish to display:
  • Comment out the <xsl:value-of select="*" /> with <!-- -->
  • Add the following (inside of the <td> tags):
  • <SharePoint:FormField runat="server" id="ff3{$Pos}" ControlMode="Display" FieldName="EventDate" __designer:bind="{ddwrt:DataBind('i',concat('ff3',$Pos),'Value','ValueChanged','ID',ddwrt:EscapeDelims(string(@ID)),'@EventDate')}"/>
  • <SharePoint:FieldDescription runat="server" id="ff3description{$Pos}" FieldName="EventDate" ControlMode="New"/>
3.  Replace parts of the code as follows:
  • id="ff#{$Pos}" (where # is equal to the display row on the form)
  • ControlMode="Display" (can be Edit, New, or Display)
  • FieldName="" (use the same field name as the value-of select statement you commented out)
  • __designer:bind="" (in the bind string, change the row position number and the field name)
  • Update the id, FieldName and ControlMode settings of the SharePoint:FieldDescription to match the SharePoint:FormField
Voilà: the date and time in a readable format, and still rendered in the logged-in user’s chosen time zone.

June 20, 2011

Google Accounts

Filed under: General,Links — phil @ 10:52 am

So, like many people of my ilk, I’ve had a gmail account since way back when you needed an invitation to join. (You might actually still need an invitation; I have not tried to create a new account recently.)

I’ve also had my own domain for more than 10 years now. It used to host its own mail, but after much gnashing of teeth, and the ability to transfer being made nearly transparent, I converted my domain hosted email to a Google Apps account. It worked flawlessly and I have been happy for many, many years.

…and then Google decided that it wanted its Google Apps accounts to have more of the functionality and features normally reserved for its own full Google accounts. A noble desire, for sure, but it came with problems of its own. You see, Google accounts and Google Apps accounts used to have separate authentication structures. This allowed one to be logged into both (or more) accounts at the same time. After the conversion, however, Google Apps accounts use the *same* authentication cookie as Google accounts do. You cannot log into both accounts at the same time unless you use two different browsers.

So, option 1 is: use two separate browsers to log into both accounts. While I am perfectly capable of doing that (and I already use multiple browsers for work), mentally I was not willing to make that sacrifice to my normal workflow and have Google open in two different browsers (it used to be two tabs next to each other in Chrome). There is some good news, though.

There is an option 2: you can configure your two accounts to allow “multiple sign-ins”. With this, both accounts are logged in (in the same tab, even), but you’re only looking at content for one account at a time. So you can be looking at your mail, then hit the account drop-down in the top left and be looking at the mail for your other account. There are some caveats, but it’s all detailed here:
Using multiple accounts simultaneously. I’m using it now and so far I have no complaints.

June 6, 2011

SharePoint: PowerPivot gotcha

Filed under: PowerPivot,SharePoint — phil @ 9:42 am

PowerPivot is an interesting beast. If you don’t know what it is, I’ll give you a very brief (and probably slightly incorrect) overview: PowerPivot is an Excel plugin, provided free by Microsoft, that lets you import data from external sources into your very own localized cube for analysis. It’s very, very fast AND very, very small (bordering on the impossible as far as compression goes). However, it makes your Excel workbooks pretty big (still much smaller than the amount of data you imported, but big enough that sending one to someone via email is pretty much out of the question). Enter PowerPivot for SharePoint. What this does is create a dedicated Analysis Services engine on your application layer; when you upload a workbook with PowerPivot data, that data is separated out into the Analysis Services engine, allowing anyone viewing to see the fruits of your labor without needing to install Excel and PowerPivot on their local desktop. There’s a catch, of course. There always is.

Installing PowerPivot on your SharePoint farm is no easy feat. Well, Microsoft claims it’s an easy feat, but only sort of, and only if you install PowerPivot and your farm at the same time. Most people don’t do that. In fact, no one should do that. In fact, if you’re thinking of doing that, just stop now and redesign your farm. But that’s a different discussion. You want to install PowerPivot. And you want to install it on your existing farm. Good news: there’s lots of documentation. (How to: Install PowerPivot for SharePoint on an Existing SharePoint Server)

Here’s the gotcha. You’re going to read through all of that documentation. And you’re going to install the analysis services engine off of the SQL CD. And you’re going to configure a PowerPivot service application in SharePoint. And you’re going to create an unattended refresh account in the Secure Store.

And you will have a 50/50 chance that this will all work just fine. Here’s the rundown: if you installed your farm with default settings, ran the configuration wizard with default settings, and installed SQL with default settings, it might work. If you, like most everyone else, used service accounts instead of “Local Service” and “Network Service”, then there is a fairly good chance you did all of the above and missed a critical step.

Let me first explain how the services work together. First off, Excel Calculation Services is what displays your workbook in the browser. In order for Excel Calculation Services to use the slicers against your PowerPivot data, it needs to be able to communicate with the analysis services cube on the application server. This is the unattended refresh account inside the Excel Calculation Services service application settings. In order for the analysis services engine to refresh the cube data from the original external source, it needs a separate account. This is the unattended refresh account inside the PowerPivot service application settings. Now, you’ve configured those accounts if you read the information from Microsoft above. But you get the dreaded error: “The data connection uses Windows Authentication and user credentials could not be delegated. The following connections failed to refresh: (name of connection in workbook for powerpivot cube).”

Here’s what you need to know: the account running the “Claims to Windows Token Service” needs two more permissions. First, it needs to be a local administrator on the machine that has the PowerPivot analysis engine installed; second, that account also needs the “Act as part of the operating system” right, found under Local Security Policy -> Local Policies -> User Rights Assignment. These are changes that need to be made to the server, not to SharePoint or SQL.
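
If you’re not sure which account that is, the service’s Windows name is c2wts, so something like the following will tell you. The group addition uses a placeholder account name; the “Act as part of the operating system” right has no built-in cmdlet and is granted in the Local Security Policy as described above.

## Find the account running the Claims to Windows Token Service, then add it to local Administrators
Get-WmiObject Win32_Service -Filter "Name='c2wts'" | Select-Object Name, StartName, State
net localgroup Administrators DOMAIN\svc-c2wts /add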

As it turns out, you can find that information here: http://msdn.microsoft.com/en-us/library/ff487975.aspx (It’s under Community Content, if you, like me, read the article and were still frustrated.)

