Author Archives: Johnson

About Johnson

People Picker not searching email address

One of my customers complained that when they try to search for a user in the people picker by email address, it displays "No results found". Searching by username or ID works fine. The mappings in "Manage User Properties" are correct: "Work Email" is mapped to the "mail" attribute. Their User Profile pages even show the email address, but searching the people picker by email address still returns "No results found".

As we know, there are some changes in how SharePoint looks up users in AD using LDAP. In SharePoint 2007 and SharePoint 2010, we sent an LDAP SearchRequest to the domain controllers under the application pool account, using long, descriptive AD filters. In SharePoint 2013, we still use the application pool account and still issue LDAP requests, but the AD filter has been shortened. The relevant TechNet article describes the Ambiguous Name Resolution (ANR) attribute set that AD evaluates when it receives an LDAP search request. By default, a fixed set of attributes is associated with ANR. However, for security reasons, the customer had turned off ANR on the mail attribute on an external domain controller.
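To make the difference concrete, here is a minimal Python sketch contrasting the two filter styles. The attribute list and filter shapes below are illustrative only, not SharePoint's actual implementation; with ANR, the server expands the single `(anr=…)` clause against the ANR attribute set itself:

```python
# Sketch only: attribute names and filter shapes are illustrative,
# not SharePoint's exact LDAP code.

def legacy_filter(term: str) -> str:
    # SharePoint 2007/2010 style: the client enumerates every attribute
    # it wants matched in one long OR filter.
    attrs = ["displayName", "sAMAccountName", "mail", "givenName", "sn"]
    clauses = "".join(f"({a}={term}*)" for a in attrs)
    return f"(|{clauses})"

def anr_filter(term: str) -> str:
    # SharePoint 2013 style: one short clause; the domain controller
    # expands it against whichever attributes have the ANR bit set.
    return f"(anr={term})"

print(legacy_filter("john@contoso.com"))
print(anr_filter("john@contoso.com"))
```

The practical consequence: if mail is removed from the ANR set on the DC, the 2013-style filter silently stops matching email addresses, even though the 2007/2010-style filter would still have matched them.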

So this can definitely cause issues in SharePoint 2013 if your SharePoint 2007 or SharePoint 2010 environment previously resolved usernames by their mail attribute in AD. To fix this, perform the following:

– On the DC, open Run and type REGSVR32 SCHMMGMT.DLL to register the Active Directory Schema snap-in
– Open an MMC console via the Run command
– Navigate to File > Add/Remove Snap-in and load the Active Directory Schema snap-in
– Expand Active Directory Schema [ServerName.Domain.Com] and select Attributes
– Locate the mail attribute, right-click it, and select Properties
– Check 'Ambiguous Name Resolution (ANR)', click Apply, then OK
– Right-click 'Active Directory Schema [ServerName.Domain.Com]' and select 'Reload the Schema'
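Under the hood, the checkbox in the schema snap-in flips a single bit: ANR membership is controlled by the 0x4 bit of the attribute's searchFlags value in the AD schema. A small Python sketch of that bit arithmetic (reading searchFlags from AD itself would of course need an LDAP client):

```python
# Illustrative bit arithmetic only, not an AD client.
ANR_FLAG = 0x4  # the fANR bit in an attributeSchema object's searchFlags

def is_anr_enabled(search_flags: int) -> bool:
    return bool(search_flags & ANR_FLAG)

def enable_anr(search_flags: int) -> int:
    # Equivalent of ticking the ANR checkbox in the schema snap-in.
    return search_flags | ANR_FLAG

flags = 0x1  # example value: attribute indexed, ANR off
print(is_anr_enabled(flags))              # False
print(is_anr_enabled(enable_anr(flags)))  # True
```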

Now you should be able to search for a value stored in the mail attribute in SharePoint 2013 and get results!

Leave a comment

Posted by on November 4, 2015 in Uncategorized


SharePoint 2013 – “Working on it” Slow pages

Many of my customers escalated a common issue: every morning, SharePoint 2013 is slow the first time it is accessed, but after that, page performance is good. Their system engineers had done some googling and added curl-based warm-up requests and other techniques to resolve the issue. At one point the issue was escalated to me, and I explained what is happening when SharePoint 2013 shows "Working on it". First, I asked them to enable JavaScript debugging/error popups. The /_layouts/15/start.aspx page was stuck on "Working on it" forever, with a JavaScript error:

Almost every page in SharePoint threw this error. A bit of JavaScript debugging revealed that SharePoint was trying to use the window.localStorage object, which was the root cause of the error. To resolve the issue, we need to disable the local storage feature: go to Internet Options –> Advanced, scroll down to "Enable DOM Storage" and uncheck its checkbox.

This might not be suitable for all cases, but I recommend enabling JavaScript debugging/error popups to find the root cause in your situation before applying any remediation recommended on the internet.

1 Comment

Posted by on April 6, 2015 in Sharepoint


Port 808 – SharePoint 2013 Search Relation

One of my customers had been setting up a SharePoint 2013 environment for the past couple of weeks. They were able to set up everything except search in a multi-server configuration. We all know that configuring search on a stand-alone server is easy, but when you want to configure a large-scale, multi-server Search Service Application, you need a proper plan before you start. One of the main prerequisites is opening port 808 on all the servers so the SharePoint 2013 Search Host Controller services can communicate with each other.

If you are experiencing the following errors while configuring a large search implementation, make sure that port 808 is open.
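Before activating the topology, it is worth verifying reachability of port 808 from each search server. Here is a generic TCP reachability sketch in Python (the server names in the loop are hypothetical, not from any real farm):

```python
# Minimal TCP port-reachability check (generic sketch, not a
# SharePoint tool): True if a connection to host:port succeeds.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the Search Host Controller port on each farm server.
for server in ["search01", "search02"]:  # hypothetical server names
    print(server, port_open(server, 808))
```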

PS C:\> Set-SPEnterpriseSearchTopology -Identity $SearchTopologyName

Set-SPEnterpriseSearchTopology : Could not connect to the HostController service on server SERVERNAME Topology Activation could not be started.
At line:1 char:1
+ Set-SPEnterpriseSearchTopology -Identity $SearchTopologyName
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (Microsoft.Offic…tSearchTopology: SetSearchTopology) [Set-SPEnterpriseSearchTopology], InvalidTopologyException
+ FullyQualifiedErrorId : Microsoft.Office.Server.Search.Cmdlet.SetSearchT

If you are looking for a PowerShell script to provision Enterprise Search on a multi-server large farm, please refer to the following URL.

Leave a comment

Posted by on November 13, 2014 in Uncategorized


Office 2007 & 2010 documents in the SharePoint 2007 Search Results


One of our SharePoint 2007 users complained that the search results were not showing documents. The results were inconsistent: documents would not show up for a given keyword even though that keyword existed in the documents.


I started investigating the issue by looking at the crawl log. The crawl log looked promising, showing that all the documents in that document library were crawled successfully. But the search results were still wrong.
Initially, I thought this could be related to "EnableOptimisticTitleOverride", since most of the document titles were the same. So I made the appropriate registry entries and restarted a full crawl. Still no luck!
This suggested the issue was not with the document titles. So I changed the XSLT of the "Search Core Results" web part to see the raw XML search results.

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" />
<xsl:template match="/">
<xmp><xsl:copy-of select="*"/></xmp>
</xsl:template>
</xsl:stylesheet>

I could not find the correct results for the given keyword in the raw XML output from the search component. This confirmed that the problem was not in displaying the search results. That raised a question: did the crawler really index the documents in that particular document library? Although the crawl log showed the documents were crawled/indexed successfully, I still wanted to verify it.

So, I started writing a tool (MOSSSearch) to check whether the crawl database at least contains the correct results. The tool accepts input parameters from administrators, builds the SQL, and queries the database.
Using this tool, I concluded that the crawl database did not contain the correct documents for the query keyword.

Then I remembered that we had set up this SharePoint 2007 environment back in 2009, when users were on Windows XP and Office 2007. We had installed the Office 2007 Filter Pack for indexing Office 2007 documents. Late in 2013, all our desktops were migrated to Windows 7 with Office 2010, and everyone started uploading Office 2010 documents to SharePoint 2007, which still had only the Office 2007 Filter Pack. Although the SharePoint 2007 crawl log showed that the Office 2010 documents were crawled/indexed successfully, in reality the SharePoint 2007 crawler could not index the Office 2010 documents with the Office 2007 Filter Pack.
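When hunting filter-pack problems like this, it helps to know which container format a given file actually is, since the two Office file families are handled by different IFilters: legacy .doc/.xls/.ppt files are OLE compound documents, while .docx/.xlsx/.pptx files are OOXML ZIP packages. A generic Python sketch of the magic-byte check (unrelated to any SharePoint or IFilter API):

```python
# Generic container sniffing: identify the two Office file families
# by their leading magic bytes.
OLE_MAGIC = bytes.fromhex("D0CF11E0A1B11AE1")  # legacy .doc/.xls/.ppt
ZIP_MAGIC = b"PK\x03\x04"                      # OOXML .docx/.xlsx/.pptx

def office_family(header: bytes) -> str:
    if header.startswith(OLE_MAGIC):
        return "legacy-binary"
    if header.startswith(ZIP_MAGIC):
        return "ooxml"
    return "unknown"

print(office_family(b"PK\x03\x04rest-of-package"))  # ooxml
```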


I installed the Office 2010 Filter Pack with SP2 and restarted the search service (net stop osearch & net start osearch). The search results then correctly returned both Office 2007 and Office 2010 documents.

Leave a comment

Posted by on July 30, 2014 in Sharepoint


SharePoint Farm – 500 Internal Server Error


The SharePoint site page displays "500 – Internal Server Error" when accessed, even after a successful SharePoint 2013 farm setup. The event viewer shows:

 “.NET Runtime version 4.0.30319.18408 – The profiler was loaded successfully.  Profiler CLSID: ‘AppDynamics.AgentProfiler’.  Process ID (decimal): 13688.  Message ID: [0x2507].”

From SharePoint's Claims Authentication:

 “An exception occurred when trying to issue security token: Loading this assembly would produce a different grant set from other instances. (Exception from HRESULT: 0x80131401).”


By default, SP2013 uses the legacy code access security (CAS) model, which is less configurable than the .NET 4.0 model. Normally, the .NET runtime security system walks the call stack to determine whether the code is authorized to access a resource or perform an operation, comparing the granted permissions of each caller against the permission being demanded. If any caller in the call stack lacks the demanded permission, a security exception is thrown and access is refused.
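The stack-walk idea can be shown with a toy model (purely illustrative, not the actual .NET security machinery): a demand succeeds only if every caller on the stack holds the demanded permission.

```python
# Toy model of a CAS-style stack walk, for illustration only.

class SecurityException(Exception):
    pass

def demand(call_stack, permission):
    # call_stack: list of (caller_name, granted_permissions), top of
    # stack first; every frame must hold the demanded permission.
    for caller, granted in call_stack:
        if permission not in granted:
            raise SecurityException(f"{caller} lacks {permission!r}")

stack = [
    ("WebPart.Render", {"UI", "FileRead"}),
    ("Profiler.Attach", {"UI"}),            # missing FileRead
    ("Runtime.Host",   {"UI", "FileRead"}),
]

try:
    demand(stack, "FileRead")
except SecurityException as e:
    print("access refused:", e)
```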

Below is a small illustration of the security stack walk (courtesy: MSDN article on Code Access Security):

Code access security


As you noticed in the Event Viewer, the .NET runtime reports that the profiler isn't compatible with the legacy model. To resolve the issue, update the web.config of the affected web application by setting the legacyCasModel attribute to false, as shown below.

 <trust level="Full" originUrl="" legacyCasModel="false" />

Leave a comment

Posted by on July 25, 2014 in Uncategorized


Troubleshooting SharePoint Performance Issues – High CPU Issues on IIS

A couple of weeks ago, I received an emergency notification from the service desk saying that SharePoint was running really slow. I replied that there could be multiple causes and asked whether there had been any major outages in the network infrastructure, datastore, or VM environment. He said no. Then I started investigating by looking at the servers, and found that one WFE was showing high CPU usage from the SharePoint process hosted on IIS. It could be due to heavy load, a bug in custom code, and so on. I decided to find the root cause using DebugDiag.

DebugDiag is a great debugging tool from Microsoft. First, I ran Process Explorer (Sysinternals) to find out which processes were consuming CPU. I found that w3wp.exe was taking 62%. So I opened DebugDiag and collected a full user-mode memory dump of the spiking w3wp.exe.

Once the dump is collected, it creates the following two files:
1. The *.dmp file holds the memory dump of the process, containing information about the threads, stack traces, locks, and so on.
2. The *.txt file contains a list of processes running at the time of dump collection.
We need to collect at least 2-3 memory dumps at 4-5 second intervals to ensure that we are troubleshooting on the correct path. Once that is done, go to the Advanced Analysis tab, add the dump file with the Crash\Hang Analyzers analysis script as shown below, then click "Start Analysis":

It opens a .mht report in the browser. I noticed that thread 22 had triggered GC. The analysis report provides the following details:

  1. All running threads along with CPU time.
  2. List of all running requests (under Client Connections).
  3. Stack trace of each thread.
  4. Process details like up time, process ID and so on.
  5. Server and OS details.

I then continued the analysis of the dump using the WinDbg tool to determine why GC was triggered. Based on the analysis, I concluded the following:

–       High CPU utilization may occur due to an infinite loop in code, heavy load, or memory exhaustion that in turn triggers GC, which spikes the CPU.

–       To troubleshoot the issue, I first restarted IIS with the IISRESET command and collected 10 sets of memory dumps at the time of the CPU spike.

–       Analyzed each dump individually using DebugDiag's Advanced Analysis.

–       Looked at the threads running for a long time and analyzed their stack traces from bottom to top.

–       Dug into the method implicated by the stack trace and advised the developer to correct or fine-tune the code.

–       Sometimes the advanced analysis doesn't help much; then I use Debugging Tools for Windows, specifically WinDbg, to analyze the dump.

  • WinDbg is well suited to troubleshooting .NET (managed) applications, as it has extensions to dump the CLR stack, heap objects, and so on.
Leave a comment

Posted by on July 24, 2014 in Uncategorized


Troubleshooting Utility Patent Published

In a real-time production environment, it is hard to identify the root cause of critical issues such as performance problems, security threats, broken authentication, and security misconfiguration. While investigating the root cause, the technical support team validates the issue in the production environment using structured monitoring and operations processes based on established frameworks such as the Information Technology Infrastructure Library (ITIL), IT Services Management, and IT Operational Frameworks. The team troubleshoots the possible devices, such as network infrastructure devices, application servers, and database servers. Even with common troubleshooting and diagnostic tools such as Performance Monitor and diagnostic approaches such as cause/effect diagrams, it is really hard for the technical support team to identify the root cause and the resolution for the issue.

To find where exactly an issue is occurring, or to find the root cause of a web application issue, the web application support team depends on multiple members of the network infrastructure, database, and server teams. They have to find time for all of these people to discuss the issue, and they need to spend a lot of time gathering information about the web application and the issue before actually identifying the root cause and troubleshooting it. Thus, it can take several days to resolve even a simple issue in a production environment.

A traditional web browser does not provide the capability to find the root cause of a web application issue. Popular browser extensions and tools such as Firebug, HttpWatch, Fiddler, and built-in web developer tools extend the browser to analyze HTTP traffic and to inspect, edit, and monitor CSS, HTML, JavaScript, and network requests in the web document. However, these extensions cannot pinpoint which component causes an issue, and they provide no information about the issue beyond the client computer.

There are multiple entities to examine when determining the root cause of an issue: network infrastructure devices such as firewalls, load balancers, and network interface cards (NICs); computer servers such as the web application server, Domain Name Server, proxy server, and database server; plus their configuration and HTTP and network traffic analysis. Multiple tools are available to the technical support team to intercept the issue from various angles, such as network tracers, dump file analyzers, system log viewers, database profilers, log parsers, and HTTP analyzers. Popular systems also exist for managing entire enterprise-level networks and applications, including OpenView® from Hewlett-Packard, Unicenter® from Computer Associates, and the IBM Tivoli® Framework; another example is the SolarWinds® Orion® Network Performance Monitor and Application Monitor. However, these systems do not provide a web browser extension that helps the user determine the root cause of an issue, pinpoint which component causes it, or surface recommended solutions from internet experts for the specific issue.

In addition, to determine the root cause, the web application support team needs to go through the log files of each network infrastructure device and computer server in the web application's network topology diagram. Each log can be huge and may contain a lot of general logging information about the device that is unrelated to the issue at hand. The sheer volume makes it harder for the team to find and extract the logging information related to the issue.

To use the diagnostic tools, the web application support team needs to be trained in finding the root cause with them. There is no existing system or method that provides an easy and elegant way to identify the root cause of an issue from the client's (i.e., the application support engineer's) web browser plug-in panel without logging into the individual computer servers or network infrastructure devices.

To overcome these problems, I designed and developed a utility that extends the web browser's capabilities to identify the root cause of a web application issue and to provide expert recommendations for the issue through web search. It offers a quick and simple method for identifying an issue in a web application. In particular, it is a troubleshooting tool that allows a user to easily identify the issue and surfaces possible workarounds or resolutions right in the web browser.

You can read more about it from here:

Here is the screenshot of the utility:


Posted by on July 21, 2014 in Uncategorized