Upgrade to SP2013, part 4: Upgrade Managed Metadata Service App

I will only upgrade one service app and that is the Managed Metadata service application, since I have built the company wiki based on a taxonomy tree there.

There is an article on Technet that I followed to upgrade this, but it failed. I tried it twice and got the same error. I am sure I missed something or did something wrong somewhere, but there is a limit to how many attempts you make before you give up. So I did my own workaround and finally got it to work.

These are the recommended steps that I followed from Technet:
Detach the Managed metadata database on the SP2010 test server
Copy it to the new SP2013 server
Attach it to the SQL server
Then run this command in Powershell to upgrade the database:

$applicationPool = Get-SPServiceApplicationPool -Identity 'SharePoint Web Services Default'

$mms = New-SPMetadataServiceApplication -Name 'Managed Metadata Service Application' -ApplicationPool $applicationPool -DatabaseName 'Managed Metadata Service'

(If you store the service application in a variable like $mms, or whatever you want to call it, you can refer to it when you create the proxy):

New-SPMetadataServiceApplicationProxy -Name 'Managed Metadata Service Connection' -ServiceApplication $mms -DefaultProxyGroup

Did an iisreset just to be sure…

Right. So the result was an error message when I tried to access the Managed Metadata service app.

Opened the Properties to make sure it had the correct database (the upgraded db name) and application pool. I also created a new application pool and assigned the correct service account to it. That did not change anything.

Then I verified that the same service account also has permissions to the database on the SQL server.
Checked the Health Analyzer. Found a message about the managed metadata service not being associated with the service apps:

Checked the “Configure service application associations” and the service apps were correctly associated.
Went back to Health analyzer and clicked on “Reanalyze now” and the message disappeared. Did not change anything.

Went into “Upgrade and Migration” and clicked on “Check upgrade status” and found the successful upgrade message so the database seems to be OK:
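The same check can be done from PowerShell. This is just a sketch of how I could have verified it, assuming the database name from the earlier steps; NeedsUpgrade is a standard property on SharePoint database objects:

```
# Find the service application database and check whether it still needs an upgrade
$db = Get-SPDatabase | Where-Object { $_.Name -eq 'Managed Metadata Service' }
$db.NeedsUpgrade   # False means the upgrade sequence has completed for this database
```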


Clicked on the second Managed Metadata service connection level and checked the second box also. I did not expect any changes from this, but just in case…

Went into “Security” and “Configure Service Accounts”, changed to another pool account, and did an iisreset to see if that helped. No, it did not change anything, still the same error.

Checked the ULS and found this error:
Failed to get term store for proxy 'Managed Metadata Service Connection'. Exception: System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
   at Microsoft.SharePoint.Taxonomy.Internal.XmlDataReader.GetDateTime(String name)
   at Microsoft.SharePoint.Taxonomy.Internal.SharedTermStore.Initialize(IDataReader dataReader, Guid termStoreIdValue, Boolean fromPersistedData)
   at Microsoft.SharePoint.Taxonomy.Internal.SharedTermStore..ctor(IDataReader dataReader, Guid termStoreId, Boolean fromPersistedData)
   at Microsoft.SharePoint.Taxonomy.Internal.DataAccessManager.GetTermStoreData(MetadataWebServiceApplicationProxy sharedServiceProxy, Boolean& partitionCreated) 3a24169c-4427-a0a0-8c09-185263be83c0
04/28/2013 14:01:11.13  w3wp.exe (0x2EDC)  0x1E80  SharePoint Server  Taxonomy  8088  Warning  The Managed Metadata Service 'Managed Metadata Service Connection' is inaccessible. 3a24169c-4427-a0a0-8c09-185263be83c0

Googled these errors (“the given key was not present”) and found nothing that really related to my problem or could solve it.

Gave up and tried my own solution, which worked immediately! The steps follow below:

My own solution
Detached the database
Deleted the Managed Metadata service application in CA that I had created in the steps above
Created a new managed metadata service application and also a new application pool account.
And I checked “Add this service application to the farm’s default list”.
Attached the upgraded database (from my former steps above, so the database needs to be upgraded)
Changed the properties of the service app, and added the name of the upgraded database
Iisreset just to be sure…
It worked immediately.
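For reference, the workaround above could be scripted roughly like this. I did these steps through Central Administration, so this is only a sketch of the equivalent PowerShell; the service account name is a placeholder, and Remove-SPServiceApplication, New-SPServiceApplicationPool and New-SPMetadataServiceApplication are standard SP2013 cmdlets:

```
# Remove the broken service application but keep the (already upgraded) database
Get-SPServiceApplication | Where-Object { $_.Name -eq 'Managed Metadata Service Application' } |
    Remove-SPServiceApplication -RemoveData:$false -Confirm:$false

# Create a fresh application pool and a new service application pointing at the upgraded database
# 'DOMAIN\svc-mms' is a placeholder managed account
$pool = New-SPServiceApplicationPool -Name 'MMS App Pool' -Account (Get-SPManagedAccount 'DOMAIN\svc-mms')
$mms  = New-SPMetadataServiceApplication -Name 'Managed Metadata Service Application' `
        -ApplicationPool $pool -DatabaseName 'Managed Metadata Service'
New-SPMetadataServiceApplicationProxy -Name 'Managed Metadata Service Connection' `
        -ServiceApplication $mms -DefaultProxyGroup
```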

May not be the correct way, but at least this worked on first shot!

Upgrade to SP2013, part 3: What to migrate

Content databases
Number and size of databases. Any old ones that should not be moved over?

Custom solutions and branding
Identify customizations: branding files, style sheets, farm solutions, customized web parts; then copy/install all these files on the new server.
Make sure the third party products or any farm solutions are compatible with SP2013.

Database structure
Should we split the database into smaller databases? The test migration will be a good test for this, to see how long the upgrade takes. I really don’t want to split our intranet into different db’s, because we would have to use different host headers or managed paths and that would confuse the users. Also, since we are replicating, it would mean even more work to enable all those extra db’s for replication. I am sure that our db is quite small compared to what Microsoft recommends. Our intranet db is separated from the project portal, so the large amount of files is kept out of the intranet.

Service apps
Search service app – NO
We are using pretty much out-of-the-box Search; I have only done some branding. So no, it will be fairly quick for us to set up a new index.

User Profiles – NO
Managed metadata – YES
We have not set up any company metadata because of the issue with editing multiple documents that existed in SP2010. But now that you actually can make good use of it, we might consider adding it. I have, however, used it for our company wiki, so this service will be needed.

Web Analytics – NO and shut down services
Analytics processing in SharePoint 2013 is now a component of the Search service, so stop the old services before moving the content db’s. The Web Analytics Data Processing Service and the Web Analytics Web Service should be stopped, and you should also go into “Monitoring” and disable the timer jobs for Web Analytics. Maybe overkill, but better safe than sorry. I did not do that for this WSS_Content DB since the web app was not configured anyway! Will test to see if any errors come out of that.
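Stopping the services can also be done in PowerShell instead of clicking through Services on Server. A sketch, assuming the instances can be matched on their display name:

```
# Stop both Web Analytics service instances on every server where they are online
Get-SPServiceInstance |
    Where-Object { $_.TypeName -like 'Web Analytics*' -and $_.Status -eq 'Online' } |
    Stop-SPServiceInstance -Confirm:$false
```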

PowerPoint Service App and Word Viewing Service App – NO
These apps were created to support our Chrome users; they are the Office Web Apps service applications. In SP2013 they are moved out to their own server, the Office Web Apps Server.

SSL certificates
We use SSL on all our sites, so the certificates need to be imported into IIS.
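The certificate import can be scripted; a sketch assuming a PFX export of the certificate and the PKI module that ships with Windows Server 2012 (the file path is a placeholder). The HTTPS binding is then assigned to the site in IIS Manager:

```
# Import the SSL certificate into the local machine's personal store
$pfxPassword = Read-Host -AsSecureString -Prompt 'PFX password'
Import-PfxCertificate -FilePath 'E:\certs\intranet.pfx' `
    -CertStoreLocation 'Cert:\LocalMachine\My' -Password $pfxPassword
```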

Make sure all host headers are added to the AAM.
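Adding a host header to the Alternate Access Mappings can be done with the New-SPAlternateURL cmdlet; a sketch with placeholder URLs:

```
# Map an additional public URL to the web application's Internet zone
New-SPAlternateURL -Url 'https://portal.example.com' `
    -WebApplication 'https://intranet.example.com' -Zone Internet
```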

We have increased our quota templates on the SP server, so these values need to be added. And of course the upload size limit.
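Both settings can be scripted too. This is a sketch, not our actual values: the URL, quota name, and the 250 MB / 2 GB limits are placeholders. The upload size is a property of the web application, while quota templates live on the content service:

```
# Raise the maximum upload size (in MB) on the web application
$wa = Get-SPWebApplication 'https://intranet.example.com'
$wa.MaximumFileSize = 250
$wa.Update()

# Create a quota template on the content service (storage level is in bytes)
$cs = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$qt = New-Object Microsoft.SharePoint.Administration.SPQuotaTemplate
$qt.Name = 'Intranet quota'
$qt.StorageMaximumLevel = 2GB
$cs.QuotaTemplates.Add($qt)
$cs.Update()
```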

They should work as in SP2010, but that needs to be tested and verified.

Upgrade to SP2013, part 2: Clean up first!

I had a look at what Technet suggests as upgrade steps, and here is the table. I don’t see anything about preparing your database for claims authentication, which was talked about so much at the conference. It will have to be trial and error.
On my SP2010 test server, I have a restored copy of our intranet and I will use that to test the upgrade to SP2013, so I get as real a scenario as possible. Will be interesting to see what happens with our branding and structure… 🙂

Before I moved the content database to the new SP2013 server, I did a clean-up on my SP2010 server. Better off not moving shit over if it can be resolved first.

Looked inside the Health Analyzer, and of course there are some messages in there about farm accounts (you know what I mean, I am sure of that!), but there is also stuff that really matters, like orphan items, missing server-side dependencies, etc. I had a message about orphan items in the database, so I just clicked “Repair automatically”, refreshed, and it was gone. The “missing server side dependencies” message requires a bit more work!

I ran this powershell command to test the content databases:
Test-SPContentDatabase -Name WSS_Content -WebApplication https://intranet.xxxxx | Out-File e:\upgrade\upgrade.txt -Width 500
Got the following results from that:

Category  : MissingFeature
Error        : True
UpgradeBlocking : False
Message         : Database [xxx] has reference(s) to a missing feature: Id = [xxx], Name = [Weather Web Part], Description = [Displays the Weather], Install Location = [WeatherWebpart].
Remedy          : The feature with Id xxx is referenced in the database [xxx], but is not installed on the current farm. The missing feature may cause upgrade to fail. Please install any solution which contains the feature and restart upgrade if necessary.

Category      : MissingSetupFile
Error           : True
UpgradeBlocking : False
Message         : File [Features\Taxonomy_WebPart_Feature1 Taxonomy_WebPart\Taxonomy_WebPart.webpart] is referenced [1] times in the database [xxx], but is not installed on the current farm. Please install any feature/solution which contains this file.
Remedy          : One or more setup files are referenced in the database [xxx], but are not installed on the current farm. Please install any feature or solution which contains these files.

Category        : MissingAssembly
Error           : True
UpgradeBlocking : False
Message         : Assembly [xxx.Eventhandler, Version=, Culture=neutral, PublicKeyToken=xxx] is referenced in the database [xxx], but is not installed on the current farm. Please install any feature/solution which contains this assembly.
Remedy          : One or more assemblies are referenced in the database [xxx], but are not installed on the current farm. Please install any feature or solution which contains these assemblies.

Since the UpgradeBlocking status is False on all of them, I guess there is no need to worry, but I have tried to resolve most of these errors anyway to clean up the database. The first thing is to check where a solution is added; to do that, just run this SQL query on your content db:

Select * from Webs where SiteId in (select SiteId from Features where FeatureId = 'guid')

Then I could go to that site and remove, for instance, web parts from the web part gallery that were no longer referenced. Removed them from both the Recycle Bin and the Site Collection Recycle Bin. Ran the Test-SPContentDatabase cmdlet again and the error was gone.
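When a feature reference can’t be cleaned up through the UI, the same removal can be sketched in PowerShell with the object model. The site URL and feature GUID below are placeholders; SPFeatureCollection.Remove with force is the standard way to drop a reference to a feature whose files are already gone:

```
# Force-remove a missing feature reference from a site collection
$featureId = [Guid]'00000000-0000-0000-0000-000000000000'
$site = Get-SPSite 'https://intranet.example.com'
$site.Features.Remove($featureId, $true)   # $true forces removal even if the feature files are missing
$site.Dispose()
```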

I also used FeatureAdmin2010 (an amazing tool from CodePlex!) to search through the farm, and removed the items it suggested.

Upgrade to SP 2013, part 1: Hardware reqs (on test server)

And so let the fun begin! This will be a post with many parts, I begin with the easiest part… 🙂

After attending the awesome SharePoint Evolution Conference in London this April, I was so inspired to get started with SP2013 that I immediately put up a migration plan when I got back to the office. Although I had installed an SP2013 server a few months ago, I had not really had any time to test much. But now it’s time!

So I had an “old” test server that I created a few months ago with the following installed. I run all on the same machine since it is my test server:
– Windows Server 2012
– SQL Server 2012
– SharePoint 2013 (no updates installed yet)

But the machine needed more juice; I could barely start it. As I had heard at the conference, the minimum requirement for a server is 8 GB RAM and I only had 4 (!), but I got 12 GB so now it’s more responsive.

Minimum reqs according to Technet:


What I had on the server:

And what it was upgraded to:

InfoPath forms do not work with replication – Updated!

10 April update: That last version 5.1.7322.8 did not solve this issue with InfoPath. It seems like the problem with templates has to do with forms that use Data Connections. There is a new version 6 out that I have not installed yet; I will come back with an update after that.

5 March: Awesome news! I got a mail from the vendor that the newest version will have a hotfix that will correct this issue. It will be released in a few weeks. Look forward to that, and will of course update this blog post if things go well.

This post is about enabling replication between servers using replication software, and the problems related to that. It works well for exchanging content, but when it comes to InfoPath there are some real problems. This is not yet solved, so there will probably be a part 2 of this post.

I have published a lot of web forms (InfoPath Forms Services) on the intranet, and when they start replicating they run into the problems listed below. The same thing happens to a list that you have customized: the customized form is lost, so it looks like a regular list again. The replication vendor has no solution so far.

The errors will run in circles: if you make an update on one server, that will replicate to the others and the errors will just keep coming back. Here are the errors and why they happen.

Advanced settings are not replicated
The first time you set up replication on an InfoPath form and try to open a form on the site, it will load in InfoPath and not in the browser. That is because the “Advanced settings” of the library are not replicated. You need to go into the library and set “Open in browser” again, and then you also have to open your form template and republish it:

Data connections are lost
When you have made this change, the form gives you a new message. Now the data connections can’t be accessed, since they have lost the connection to the secondary data sources:

So you have to go in and add those back again in the IP form template:

By this time, your form has started replicating and all the other target servers have gotten those same errors. So this will just keep going around in circles.

So what I did was to stop replicating the “Content types” and it works HALFWAY. Because then another problem came up…

The xsn version of the form template changes
Of course, when you have made a change locally on a template it gets a new version number. So now all versions of this template are out of sync, and the result is that the users get the error message below. If they click “OK” the form loads, but it’s still annoying:

I have my IP forms set to “Automatically upgrade form version”, which is the default setting. If the user clicks OK the form loads, but I don’t want that message! I compared the form versions, and the source and target servers had different version numbers, which of course is because I had to open the form and add the data sources and do all the other steps I wrote about before, that were lost while replicating:

So I changed the version to be the same on both servers, and that helped; the message was gone. But this is only a temporary fix, not a SOLUTION! Once a form is updated again, the problems will all come back. So I will test publishing a new template with “Do nothing” instead of “Automatically upgrade form version” to see if that helps.

My hope now is that the vendor will deliver a solution for this soon, otherwise this is turning into a big EPIC FAIL.

An error occurred while verifying virtual directories on target

This is not an error in the Replication tool, but it caused the replication to stop working.

One day the replication on one of our servers stopped working. Inbound packages were just stuck in the queue, and when I tested the replication connection I got this message:

“An error occurred while verifying virtual directories on target”:

So I looked in the log files for the replication tool and found this error:

This started a journey of troubleshooting. First I did the standard things:
– ping all possible connections that I could think of that the server would take
– restarted the replication services
– iisreset
– disabled and enabled the replication on the web application
– restarted services again
– rebooted the server to make sure that no stupid updates or anything had made some changes
– checked the proxy settings (there had been a proxy change on all servers) and they were fine
– ran the replication configuration wizard (several times)
– ran the SharePoint configuration wizard (in case some update had made changes; far-fetched, but you get desperate…)

Everything looked just fine but still nothing was working.

When none of these steps works, that is always an indicator that something more serious is going on.
I ended up reinstalling the replication software. But no luck.

The vendor had a look at this, and my infrastructure colleagues checked that all connections between the servers were working and that nothing was blocking anything.

Turns out that there was a setting in the registry that pointed to the old proxy IP, and it was the farm account that had that setting in its browser profile. So a manual proxy configuration pointing to an old IP address was the root cause of all this. As soon as it was removed, replication was running fine again!

SQL disks running out of space due to Recovery model set to Full

Since we started using the replication software, our SQL disks have filled up quite quickly. We usually have alert systems warning us about disks running out of space, but due to a new system those had not yet been set up. So one day I could not get replication to work on one of our servers: all packages ran into Error and I could not change anything on the site without getting strange error messages. I looked in the Health Analyzer in Central Administration and found this error:

“Drives used for SQL databases are running out of free space”

And the disk space on that SQL server was of course full, so we first had to expand the disk with an extra 100 GB to get everything working again. But that is only a temporary panic solution 🙂 We also got these messages from the backup:

“Database XXX is configured to maintain transaction logs. Transaction log backups are not being performed. This will result in the log growing to fill all available disk space. Regular log backups should be scheduled or the database should be changed to the simple recovery mode.”

So I changed the Recovery model on the SQL db’s that were listed in that mail, from Full to Simple.
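The change can also be scripted instead of clicking through Management Studio. A sketch using Invoke-Sqlcmd from the SQL Server 2012 PowerShell module; the server, database, and log file names are placeholders, and shrinking the log afterwards is my own optional extra step to reclaim the space the full-recovery log had already consumed:

```
# Switch a database to the Simple recovery model, then shrink its (now truncatable) log file
Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Query @"
ALTER DATABASE [WSS_Content] SET RECOVERY SIMPLE;
USE [WSS_Content];
DBCC SHRINKFILE (N'WSS_Content_log', 1);
"@
```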


We run daily SQL backups, so I believe that the Simple recovery model is enough for us. The most common restore scenario is restoring a site; since the Recycle Bin was introduced in MOSS, I have never had to do a restore for a single document. With daily backups I can at least restore from one day back, and that has never been a problem for us in restore scenarios. The difference with Full: it lets you restore to a specific point in time of failure.

Recommendations on Technet after switching to the Simple recovery model:

Discontinue any scheduled jobs for backing up the transaction log.
Ensure periodic database backups are scheduled. Backing up your database is essential both to protect your data and to truncate the inactive portion of the transaction log.

I found a good explanation of the Simple recovery model on an MSSQL site:

The “Simple” recovery model is the most basic recovery model for SQL Server. Every transaction is still written to the transaction log, but once the transaction is complete and the data has been written to the data file the space that was used in the transaction log file is now re-usable by new transactions. Since this space is reused there is not the ability to do a point in time recovery, therefore the most recent restore point will either be the complete backup or the latest differential backup that was completed. Also, since the space in the transaction log can be reused, the transaction log will not grow forever as was mentioned in the “Full” recovery model.

NOTE: The recommended model would of course be to have a Full recovery model, since you can go back to the single point of failure time, but for us the Simple mode has been enough.

I look forward to hearing what others think of this recovery model, and whether you have other recommended settings that help avoid running out of disk space 🙂
