Sunday, October 29, 2017

The Role of Information Systems in Knowledge Management

We operate in an era where most of our decisions and work are based on the information at our disposal. Organizations continually compete to gain knowledge on the basis of this information. With this come more complex products and services, meaning that learning will remain an inevitable process in the organizational life-cycle.

It is becoming increasingly discernible that in order for organizations to gain a competitive edge, their knowledge assets need to be able to create knowledge and transfer it in a manner that is difficult for competitors to imitate. This is further confirmed by Nonaka, who defines organizational knowledge as "the capability of a company as a whole to create new knowledge, disseminate it throughout the organization, and embody it in products, services, and systems."

What Is Knowledge Management?

Kakabadse et al. view the chain of knowledge flow as data-information-realization-action/reflection-wisdom (see Figure 1). Data comprises observations or less meaningful facts drawn out of context of use (Zack, 1999).

Figure 1: Chain of knowledge flow

Information is an upshot of committing data within some meaningful content, often in the form of messages (Zack, 1999).

Knowledge is, thus, a “justified true belief”: essentially what people believe and value on the basis of conglomerated information (messages) through experience and the communication of inference (Blacker, 1995).

An exercise of discipline or action is connoted by one's need to acquire the needful information and to realize the value of this information (Kakabadse et al., 2001).
Therefore, realization can be regarded as information put to productive use, and we may gain wisdom through action and realization because, according to Pascual-Leone, as cited by Kakabadse et al. (2001), “knowing how to use information in any given context requires wisdom”.

Kakabadse et al. identify the different disciplines which are most influential in the field of KM as:

  • Philosophy (in defining knowledge);
  • Cognitive science (in understanding knowledge workers);
  • Social science (understanding motivation, people, interactions, culture, environment);
  • Management science (optimizing operations and integrating them within the enterprise);
  • Information science (building knowledge-related capabilities);
  • Knowledge engineering (eliciting and codifying knowledge);
  • Artificial intelligence (automating routine and knowledge-intensive work); and
  • Economics (determining priorities).

This has also led to a plenitude of literary definitions for KM; in fact, a review by Hlupic et al. (2002), as cited by Bouthillier et al. (2002), identifies 18 different definitions of KM. While there have been many attempts to define KM from a theoretical perspective, these efforts hardly reference the relationships between KM and IM (Bouthillier et al., 2002).

How does KM differ from IM?

The difference between KM and IM has been a topic of discussion, and more often than not the two terms “knowledge” and “information” are used interchangeably.
This essay focuses more on the practicalities than on the theories or philosophies associated with knowledge, or better yet, the management thereof. We shall therefore adopt Nonaka's (1994) definition of knowledge, as cited by Alami M. et al (1999), as "a justified personal belief that increases an individual’s capacity to take effective action."

This topic does not lend itself much to discussing the difference between data, information and knowledge, since that is a debate many authors have engaged in and one which extends beyond the scope of this topic. We will, however, adopt Vance's (1997) over-simplified definition of “information as data interpreted into a meaningful framework, whereas knowledge is information that has been authenticated and thought to be true".

There are a few gap theories which attempt to justify the metamorphosis of data into information and that of information into knowledge. Again, most of these efforts paint a fuzzy picture, especially when trying to explain how information becomes knowledge, or how knowledge becomes information (again), but Alami M. et al (1999) provide this quotation of Nonaka and Polanyi: “Information becomes knowledge once it is processed in the mind of an individual, and knowledge then becomes information again once it is articulated or communicated to others in the form of text, computer output, spoken or written words, or other means”.

Who Owns Knowledge Management?

Galliers R. et al (2001) "argue that the field of Information Systems should no longer be distracted from its natural locus of concern and competence, or claim more than it can actually achieve". They consider Knowledge Management (KM) and Knowledge Management Systems (KMS) to be among the latest fads, alongside BPR, which have been adopted by IS yet have little to offer. On the contrary, and while agreeing with their assertions only in small part, I believe that although KM as a concept and as a practice is more about people management, a lot can be attributed to IS, and IS has a crucial role to play in the realization of KM within organizations. This is especially true when considering that data and information are some of the key elements in the implementation of KMS.

In the same breath, one could also reason that IS deals more with the management of data than the management of information, because the function of IS is more about the processing of data than about the information articulated from the processing of those data. IS may not be the ultimate custodian of KM initiatives within organizations, but it surely plays a pivotal role in the identification, implementation and management of the technology and systems essential for the fruition of these initiatives. As a matter of fact, because of the ever-present competitor factor implicit in KM as a concept, Strategic Management, R&D and Customer Relations play as big a role as Human Resources and, to a lesser extent, Finance, but this essay focuses on the significant role that IS has to play in KM.

Where does IS fit in?

IS is an enabling function which provides the platform, technical tools and resources, and facilitates the implementation of KM initiatives within organizations. This makes me believe that KM would still exist without the IS or technology element, albeit as a difficult endeavor.

From an IS/IT perspective, the transformation of data management into information management is a rather smooth process, since computers unreservedly lend themselves well to information systems. However, things become more complex as we attempt to elicit knowledge out of these information systems. This is confirmed by Galliers, R. et al: “Whereas most people agree that data and information may exist outside humans, supporters of the community view of knowledge would argue that knowledge can never be separated from the knower and thus never stored digitally”.

Sternmark D. argues that "all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information".

Figure 2 illustrates Lawton’s (2001) Knowledge Management System Architecture

Lindvall et al. state that supporting KM requires a collection of technologies for authoring, indexing, classifying, storing, contextualizing and retrieving information, as well as for collaboration and the application of knowledge. The robustness of the back-end and the user-friendliness of the front-end are also cited as the basic necessities of software for KM.

The bottom-most layer in the architecture, Information and Knowledge Sources, represents reservoirs for explicit knowledge.
The knowledge repository is therefore supported by tools in the low-level IT infrastructure.

Classifying and indexing tools become useful in organizing knowledge to fit the organizational context, resulting in the creation of a “knowledge map” based on the taxonomy of the organization. At the personalized knowledge gateway level, knowledge is distributed to those who need it through portals.

According to Lindvall et al., it is challenging to draw a distinction between IT and KM.

When looking at Lawton’s (2001) architecture model, KM is considered to reside in the upper layers while IT comprises the lower layers, although the bounds are blurry.

Tools and Technologies

According to a survey conducted by Skyrme, J., the first initiative of many KMS implementations involves the installation of intranets and the addition of best-practice or "expert databases", while the second leading driver of KM strategies involves the creation of new knowledge, innovation, and the transformation of this knowledge into valuable products and services.

Knowledge management as a concept is nothing new to IS: the 1970s saw an increased interest in "expert systems" and artificial intelligence, although they fell short of expectations. Skyrme, J. describes this as an era "where we tried to make computers think, rather than using computers to help humans think".

Listed below are some of the key technologies and tools identified by Skyrme, J. which may facilitate KM initiatives within organizations.

Intranet, Internet

The omnipresent Internet protocols allow us to access “any information, anywhere, at any time”.

Groupware (e.g. Microsoft SharePoint, Lotus Notes)

These provide us with discussion databases and allow us to access the ‘organizational memory’, as well as current news feeds in areas of interest.

Browsers and Client Software

These act as front-ends to information in many formats and to many of the other knowledge tools, such as document management or decision support.

Intelligent Agents

Due to the problem of information overload, "intelligent agents can be trained to roam networks to select and alert users of new relevant information". He gives the example of a related technology which "British Telecom have found can summarize large documents, retaining over 90 per cent of the relevant meaning with less than a quarter of the original text".

Document Management

Documents, especially structured documents, are the form in which much explicit knowledge is shared, because these turn out to be active knowledge repositories.

Figure 2: Knowledge Management System Architecture

Figure 2 represents Lawton’s (2001) widely referenced knowledge management system architecture, where sources of explicit knowledge are handled by the lowermost layer. This explicit knowledge lives in database records and email messages. This bottom layer is supported by standard authoring tools such as word processors, mail and file servers, and database management systems.

The infrastructure layer is also supported by file servers, intranets, the Internet, and document and content management systems.
The knowledge that is to be captured needs to be relevant and serve the needs of each organizational context. Therefore, indexing and classifying tools prove handy when creating a “knowledge map” based on the corporate taxonomy.

At the next level, tools are required to support data, the discovery of knowledge, and collaboration services.
Whenever users and applications (e.g. e-Learning, competence management, intellectual property management and CRM) need access to information, portals can be used to distribute this knowledge.

This architecture model does not differentiate much between IT and KM tools; however, it considers all the layers from the ‘knowledge repository’ upwards to be more KM-aligned, while IT comprises the layers below, with no clear boundaries.


Practitioners should not impose traditional information-processing models on KM initiatives, any more than they should build them to predict the future. These knowledge management systems should, however, be built to anticipate surprises based on past experiences and lessons learned. Almost every knowledge management initiative should regard IS and related technologies as crucial elements of its success. Although it may take a while for computers to match humans when it comes to acquiring and transferring knowledge, owing to humans' advanced reasoning capabilities, computers are good at processing data. Knowledge Management Systems should thus continue to provide us with the relevant facts so that we can make better decisions more quickly.


  • Lindvall, M., Rus, I., Sinha, S., 2003, "Software Systems Support for Knowledge Management", MCB UP Ltd.
  • Sternmark, D., "Information vs. Knowledge: The Role of Intranets in Knowledge Management", Knowledge Management Group, Viktoria Institute.
  • Galliers, R. D., Newell, S., 2001, "Back to the Future: From Knowledge Management to Data Management", Proceedings of ECIS 2001, Bled, Slovenia, pp. 609-615.
  • Skyrme, J., "Knowledge Management Solutions – The IT Contribution", David Skyrme Associates Limited.
  • Hlupic, V., Pouloudi, A., Rzevski, G., 2002, "Towards an integrated approach to knowledge management: 'hard', 'soft' and 'abstract' issues", Knowledge and Process Management, 9, 1, 90-102.
  • Zack, M. H., 1999, "Managing codified knowledge", Sloan Management Review, 40, 4, 45-58.
  • Korac-Kakabadse, N., Kouzmin, A., Korac-Kakabadse, A., 2001, "From tacit knowledge to knowledge management: leveraging invisible assets", Knowledge and Process Management, 8, 3, July-September, 137-54.

Monday, February 29, 2016

Slowly Changing Dimension Type II: My Simple Personal Alternative To Merge

I've been working with SQL Server's merge statement for some time now, and I truly believe it's a great feature. Alas, the world out there is not so perfect, and you'll run into a few hurdles sooner than you know it. They range from cross-server limitations to deadlocks and performance degradation. The list goes on and on...
Google: sql server merge statement limitations

I found myself having to come up with a custom solution. Yes, a lot has been written on SCD Type II. Reinvent the wheel much???

I will create two tables...
1. dbo.customer - the transactional (source) table
2. dbo.dim_customer - the dimension table

create table dbo.customer (
customer_id int identity(1,1) not null,
customer_name varchar(50) null,
city varchar(50) null,
favorite_car varchar(50) null
)

create table dbo.dim_customer (
customer_id int null,
customer_name varchar(50) null,
city varchar(50) null,
favorite_car varchar(50) null,
row_is_current int not null,
row_effective_date date not null,
row_expire_date date not null
)
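For comparison, the merge-based take on the same load (the approach whose limitations pushed me towards this alternative) would look roughly like the sketch below. The column names follow the two tables above, and the change-detection predicate is only illustrative:

```sql
/*rough merge-based SCD Type II sketch, for comparison only:
  the inner merge expires changed rows and inserts brand-new customers;
  the outer insert re-adds a fresh current version of each expired row*/
insert into dbo.dim_customer (customer_id, customer_name, city, favorite_car,
        row_is_current, row_effective_date, row_expire_date)
select customer_id, customer_name, city, favorite_car,
       1, cast(getdate() as date), cast('99991231' as date)
from (
    merge dbo.dim_customer as t
    using dbo.customer as s
        on s.customer_id = t.customer_id and t.row_is_current = 1
    when matched and (isnull(t.city,'') <> isnull(s.city,'')
                   or isnull(t.favorite_car,'') <> isnull(s.favorite_car,''))
        then update set t.row_is_current = 0, t.row_expire_date = cast(getdate() as date)
    when not matched by target
        then insert (customer_id, customer_name, city, favorite_car,
                     row_is_current, row_effective_date, row_expire_date)
             values (s.customer_id, s.customer_name, s.city, s.favorite_car,
                     1, cast(getdate() as date), cast('99991231' as date))
    output $action as merge_action, s.customer_id, s.customer_name, s.city, s.favorite_car
) as changes (merge_action, customer_id, customer_name, city, favorite_car)
where merge_action = 'UPDATE';
```

It works, but it is exactly this kind of statement that runs into the limitations mentioned above, which is why the steps that follow break the load into plain updates and inserts.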

truncate table dbo.customer
dbcc checkident ('dbo.customer', reseed, 1)
;with cte_customers as (
select 'john' customer_name,'cape town' city,'jaguar' favorite_car union all
select 'paul' customer_name,'johannesburg' city,null union all
select 'peter' customer_name,'pretoria' city,null union all
select 'matthew' customer_name,'durban' city,null
)
insert into dbo.customer(customer_name,city,favorite_car)
select customer_name,city,favorite_car
from cte_customers

select * from dbo.customer

declare @not_expired date
declare @load_time date
set  @not_expired = cast('99991231' as date)
set  @load_time = cast(getdate() as date)

/*flag current rows whose tracked attributes have changed; isnull makes the compare null-safe*/
update t set
  t.row_is_current = -1,
  t.row_expire_date = @load_time
from dbo.customer s
  inner join dbo.dim_customer t on s.customer_id = t.customer_id
  and t.row_is_current = 1 and t.row_expire_date = @not_expired
where isnull(t.customer_name,'') <> isnull(s.customer_name,'')
  or isnull(t.city,'') <> isnull(s.city,'')
  or isnull(t.favorite_car,'') <> isnull(s.favorite_car,'')

/*insert changes where row_is_current is -1*/
insert into dbo.dim_customer
select s.*,1 row_is_current,@load_time row_effective_date,@not_expired row_expire_date
from dbo.customer s
  inner join dbo.dim_customer t on s.customer_id = t.customer_id
where t.row_is_current = -1 and t.row_expire_date = @load_time

/*update changes set row_is_current to 0 where row_is_current is -1*/
update t set
  t.row_is_current = 0,
  t.row_expire_date = @load_time
from dbo.dim_customer t
where t.row_is_current = -1 and t.row_expire_date = @load_time

insert into dbo.dim_customer
select s.*,1 row_is_current,@load_time row_effective_date,@not_expired row_expire_date
from dbo.customer s
  left join dbo.dim_customer t on s.customer_id = t.customer_id
where t.customer_id is null

Deleted rows... how does everyone deal with these anyway?
;with cte_deleted_records as (
select t.customer_id, t.row_is_current
from dbo.customer s
  right join dbo.dim_customer t on s.customer_id = t.customer_id
where s.customer_id is null and t.row_is_current = 1
)
update dels
set dels.row_is_current = -999
--  ,row_expire_date = @load_time /*maybe optional*/
from cte_deleted_records dels

select * from dbo.customer
select * from dbo.dim_customer
--truncate table dbo.dim_customer


update dbo.customer
set  favorite_car = 'jaguar'
where customer_id = 1

update dbo.customer
set  favorite_car = 'mazda',
  city = 'bloemfontein'
where customer_id = 2

insert into dbo.customer(customer_name,city,favorite_car)
  values ('mary','port elizabeth','ford')

select * from dbo.customer
delete from dbo.customer where customer_id = 1
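After re-running the load steps above against these changes, a quick (purely illustrative) history check on the dimension would be:

```sql
/*inspect version history per customer: expired rows, current rows and deletions*/
select customer_id, customer_name, city, favorite_car,
       row_is_current, row_effective_date, row_expire_date
from dbo.dim_customer
order by customer_id, row_effective_date
```

You should see paul carrying both an expired bloemfontein/mazda-era row and a current one, mary appearing as a brand-new current row, and john's row flagged with -999 after the delete.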

Thursday, May 21, 2015

Twenty Signs Your SSAS Implementation Will Fail Even If You Did Hire The Right SSAS Consultant

This post first appeared here on Thomas Ivarsson's blog...and it was initially titled: "Twenty signs that you have hired the wrong SSAS consultant" 

These are some of my observations during my years with Analysis Services since OLAP Services in 1999.

  1. All dimensions have parent-child hierarchies. There is absolutely no excuse for this. Use parent-child hierarchies only when you have no other option.
  2. The data source view has named queries with a lot of transformations, name changes of columns, derived tables, sub-queries and unions in the same mess. The ETL system has moved into the cubes, and the source data warehouse (if there is one) is not properly structured. It is a clear indication that the back-end data warehouse is a mess.
  3. The measure names are still not right so all measure group measures have visibility = false and are replaced by calculated measures.
  4. The dimension sources consist of 5 to 15 views joined together and have several layers of views on top of each other, the last with different naming standards. This makes it very hard to follow the ETL flow back to the source tables.
  5. The dimension keys are text based, not integers.
  6. All dimensions have no attribute relations in the user or natural hierarchies.
  7. The natural hierarchies in the source dimension tables or views have undetected many-to-many relations that will make the distribution of data across dimension members random. This can happen when you have hidden relations in the source table between attributes that are not primary keys.
  8. The cube dimensions have a lot of report hierarchies, which are hierarchies where the attributes do not have a one-to-many relation from the top level to the subordinate attributes. The color and size of a product are an example.
  9. The cube has no aggregations.
  10. The cube consists of more than 15 dimensions in a single measure group, and many of the dimensions consist mainly of single attributes with 1-10 members and no hierarchies.
  11. The cube has 50-200 measures in a single measure group. Sometimes dimensions move into measures, like actual, budget and forecast measures.
  12. The cube has 5000 rows of MDX code when you copy and paste it into Word using 8 as the font size. To be fair, you can be forced into this by business requirements, but you should warn about it from a maintenance perspective. It is very hard to debug.
  13. Measures have null for many dimensions. This is a clear sign that the data model behind the cube is not properly designed.
  14. The cube has no check for division by zero errors in calculated measures so users will see ugly error messages in the table cells.
  15. The consultants do not know what UAT is, have never seen a test protocol and do not know how to do tests while developing the cube.
  16. The consultant thinks that Kimball and Inmon are two soccer teams.
  17. The consultants do not know how to write documentation for a cube except for dropping all the MDX scripts into tables in Word.
  18. Dimension key errors while processing have been set to ignore errors and discard records in a production system. This is a sign of no handling of orphan dimension foreign keys in the fact table; SSAS has a great feature to take care of missing dimension keys. With this approach the cube will contain only a subset of the data in the source table.
  19. The cube has 4-5 customer dimensions, not one customer dimension with 4-5 natural hierarchies. This is a migration from AS 2000 without a review of the cube model despite all the changes that came in SSAS 2005 and later.
  20. There are no partitions in the cube when the fact table has 50-100 million records.

Tuesday, March 11, 2014

Compare Data from Two Tables

There comes a time when you need to compare data from two structurally identical tables. When that time comes, you can either go the Excel route (I respect Excel), ask your boss to lend you the company credit card and purchase a commercial package from the likes of RedGate®, or you could just code the darn thing.

I must say, however, that I've looked at many different options: I've tried SSIS, SQL Server Data Tools, RedGate® Data Compare, and SQL Server's lesser-known tablediff.exe. None of these gave me the results I needed, or I just wasn't happy with the output format.

I found a nifty little piece of code written by Roman Pekar which sort of made the job easier. I improved it a little and added a bit of dynamism to the script. Now you can pass any two identical tables and let SQL compare the data for you.

Without further ado, here is the code...Please feel free to improve it. I tried to comment it as much as I could.
alter procedure table_compare
@first_table varchar(20),
@second_table varchar(20),
@id_column varchar(20),
@column_list varchar(max) = null
as
/******
****** Author: Thato Mantai
****** Purpose: Compare data from two identical tables
****** Date: 11 March 2014
****** Version: 1.0
****** Credits: Roman Pekar
******/
declare @xml_first_table xml                -- xml structure to hold data from the first table
declare @xml_second_table xml               -- xml structure to hold data from the second table
declare @sql_all_data varchar(4000)         -- dynamic sql for storing unique records from both tables
declare @first_table_columns varchar(500)   -- stores all columns from first table
declare @second_table_columns varchar(500)  -- stores all columns from second table
declare @err_table_not_found varchar(200) set @err_table_not_found = 'Sorry, I can''t find specified table, "[plchldr_table_name]" in the current database.'
declare @err_key_column_not_found varchar(200) set @err_key_column_not_found = 'Sorry, I can''t find specified key column, "'+@id_column+'" in table "[plchldr_table_name]".'

if object_id(@first_table, 'u') is null begin set @err_table_not_found = replace(@err_table_not_found,'[plchldr_table_name]',@first_table) raiserror(@err_table_not_found,18,1) return -1 end
if object_id(@second_table, 'u') is null begin set @err_table_not_found = replace(@err_table_not_found,'[plchldr_table_name]',@second_table) raiserror(@err_table_not_found,18,1) return -1 end

if not exists(select 1 from sys.columns where object_name(object_id) = @first_table and name = @id_column) begin set @err_key_column_not_found = replace(@err_key_column_not_found,'[plchldr_table_name]',@first_table) raiserror(@err_key_column_not_found,18,1) return -1 end
if not exists(select 1 from sys.columns where object_name(object_id) = @second_table and name = @id_column) begin set @err_key_column_not_found = replace(@err_key_column_not_found,'[plchldr_table_name]',@second_table) raiserror(@err_key_column_not_found,18,1) return -1 end

/*let's start with a bit of housekeeping*/
if object_id('table_compare_all_data', 'u') is not null drop table table_compare_all_data
if object_id('temp_table_for_xml_1', 'u') is not null drop table temp_table_for_xml_1
if object_id('temp_table_for_xml_2', 'u') is not null drop table temp_table_for_xml_2

/*get all columns from first and second tables; the else part (an explicit @column_list) still needs to be coded*/
if @column_list is null
begin
 select @first_table_columns = stuff((select distinct top 100 percent '],[' + t.name
  from sys.columns as t where object_name(object_id) = @first_table
  order by '],[' + t.name for xml path('')), 1, 2, '') + ']'
 select @second_table_columns = stuff((select distinct top 100 percent '],[' + t.name
  from sys.columns as t where object_name(object_id) = @second_table
  order by '],[' + t.name for xml path('')), 1, 2, '') + ']'
 if @first_table_columns <> @second_table_columns
 begin
  raiserror('The two tables do not contain the same columns',18,1)
  return -1
 end
end

-- alias the key column as id_column so the xml comparison can find it
-- (i suppose i could have used "../ *[local-name()=sql:variable(...)]" instead, but i ended up with errors. will look into this later)
set @first_table_columns = replace(@first_table_columns,'['+@id_column+']','['+@id_column+'] as id_column')
set @second_table_columns = replace(@second_table_columns,'['+@id_column+']','['+@id_column+'] as id_column')
print @first_table_columns
print @second_table_columns

/*build and run dynamic sql to store all data*/
set @sql_all_data = 'with cte_all_data as ('
 + ' select '+@first_table_columns+','+char(39)+'first_table'+char(39)+' source_table from ['+@first_table+']'
 + ' union all'
 + ' select '+@first_table_columns+','+char(39)+'second_table'+char(39)+' source_table from ['+@second_table+'])'
 + ' select * into table_compare_all_data from cte_all_data'
execute(@sql_all_data)

/*clean up the comma separated column list*/
if @column_list is null
begin
 set @first_table_columns = replace(@first_table_columns,'['+@id_column+'] as id_column','id_column')
 set @second_table_columns = replace(@second_table_columns,'['+@id_column+'] as id_column','id_column')
end
declare @sql_for_temp_1 varchar(500)
declare @sql_for_temp_2 varchar(500)
set @sql_for_temp_1 = ' select top 5 * into temp_table_for_xml_1 from ('
 + ' select '+@first_table_columns+' from table_compare_all_data where source_table = '+char(39)+'first_table'+char(39)
 + ' except select '+@second_table_columns+' from table_compare_all_data where source_table = '+char(39)+'second_table'+char(39)+' ) as t1'
set @sql_for_temp_2 = ' select top 15 * into temp_table_for_xml_2 from ('
 + ' select '+@second_table_columns+' from table_compare_all_data where source_table = '+char(39)+'second_table'+char(39)
 + ' except select '+@first_table_columns+' from table_compare_all_data where source_table = '+char(39)+'first_table'+char(39)+' ) as t1'
execute(@sql_for_temp_1)
execute(@sql_for_temp_2)

select @xml_first_table = (select * from (select * from temp_table_for_xml_1) as a for xml raw('xml_first_table'))
select @xml_second_table = (select * from (select * from temp_table_for_xml_2) as a for xml raw('xml_second_table'))
--select * from table_compare_all_data
--select * from temp_table_for_xml_1
--select * from temp_table_for_xml_2

;with cte1 as (
 select
  t.c.value('../@id_column', 'nvarchar(20)') as id,
  t.c.value('local-name(.)', 'nvarchar(128)') as name,
  t.c.value('.', 'nvarchar(max)') as value
 from @xml_first_table.nodes('xml_first_table/@*') as t(c)
), cte2 as (
 select
  t.c.value('../@id_column', 'nvarchar(20)') as id,
  t.c.value('local-name(.)', 'nvarchar(128)') as name,
  t.c.value('.', 'nvarchar(max)') as value
 from @xml_second_table.nodes('xml_second_table/@*') as t(c)
)
select
 isnull(c1.id, c2.id) as key_value,
 isnull(c1.name, c2.name) as column_name,
 isnull(c1.value,'value not found in table '+@first_table) as value_in_first_table,
 isnull(c2.value,'value not found in table '+@second_table) as value_in_second_table
from cte1 as c1
 full outer join cte2 as c2 on c1.id = c2.id and c1.name = c2.name
where not
(
 c1.value is null and c2.value is null or
 c1.value is not null and c2.value is not null and c1.value = c2.value
)
--exec table_compare 'user1','user2','uid'
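To see it in action, a throwaway test along these lines (the table and column names are made up for the example) should flag the single differing value:

```sql
/*two structurally identical tables with one differing row*/
create table user1 (uid int, fname varchar(50))
create table user2 (uid int, fname varchar(50))
insert into user1 values (1,'john'),(2,'paul')
insert into user2 values (1,'john'),(2,'ringo')
exec table_compare 'user1','user2','uid'
--should report the fname difference for uid = 2 ('paul' vs 'ringo')
```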

Monday, March 10, 2014

How Can I Find How Much Space Each Table Uses In Powerpivot

If you are ever interested in finding out how much space each table uses in your PowerPivot model, this post might prove useful.

Handling Slowly Changing Dimension Type II in PowerPivot

I think this is a very interesting, simple yet effective approach to handling SCD Type II in a BISM environment. It bypasses the ofttimes long-winded ETL process, where you'd usually need to store all your historical data in relational tables. See Alberto Ferrari's post here: PowerPivot and the Slowly Changing Dimensions

Thursday, September 12, 2013

Could not load file or assembly 'Microsoft.AnalysisServices.SharePoint.Integration'

A lot of articles have been written about this particular issue. It seems like a really big issue, with errors popping up everywhere, but in all honesty, its bark is worse than its bite. I'll update this post later with a few online resources which I've found very useful while trying to solve this error.

The error usually goes something like this...

Error: Could not load file or assembly 'Microsoft.AnalysisServices.SharePoint.Integration, Version=, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.
Option 1

Open up command prompt with elevated permissions and run this command

RD /S C:\Windows\Assembly\GAC_MSIL\Microsoft.AnalysisServices.SharePoint.Integration

In most cases you will get an Access Denied error. That is because w3wp.exe is holding onto the file. You will need to stop IIS and run the command again.

Option 2

1. After the installer fails disable the assembly cache viewer.
2. Go into the assembly and find the file Microsoft.AnalysisServices.SharePoint.Integration.dll
3. Copy this dll to the bootstrap location: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\SQLServer2008R2\x64
4. Remove the PowerPivot instance from Programs and Features (SQL Server 2008 R2 uninstall).
5. Enable the assembly cache viewer and copy the Microsoft.AnalysisServices.SharePoint.Integration.dll into the GAC.
6. Run the installation process again to install the PowerPivot instance.