Insert into a table with a unique index
Hi,
I created a table to hold a factor used in date calculations; the other two columns are the table's key:
CREATE TABLE TMP_FATOR
(
  SETID     VARCHAR2(5 BYTE)  NOT NULL,
  COMPANYID VARCHAR2(15 BYTE) NOT NULL,
  FATOR     NUMBER
);
CREATE UNIQUE INDEX IDX_TMP_FATOR ON TMP_FATOR
(SETID, COMPANYID)
NOLOGGING;
I want to insert into the table but skip duplicate-key errors. I tried:
declare
i number;
begin
i:=1;
EXECUTE IMMEDIATE 'TRUNCATE TABLE SYSADM.TMP_FATOR';
BEGIN
INSERT INTO /*+ APPEND*/ SYSADM.TMP_FATOR
SELECT T1.SETID,
T1.COMPANYID,
SYSADM.pkg_ajusta_kenan.fnc_fator_dias_desconto(T1.SETID,T1.COMPANYID) fator
FROM SYSADM.PS_LOC_ITEM_SN T1;
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
NULL;
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;
COMMIT;
end;
But it did not work. Why?
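For what it's worth: an INSERT ... SELECT is a single atomic statement, so the first ORA-00001 rolls back every row and the exception handler ends up skipping the entire insert, not just the duplicate rows. One way to skip only the offending rows is DML error logging — a sketch against the tables above (requires 10gR2+; note the APPEND hint must be dropped, since a direct-path insert still aborts on unique-key violations even with LOG ERRORS):

```sql
-- One-time setup: creates ERR$_TMP_FATOR to receive rejected rows
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('TMP_FATOR');
END;
/

INSERT INTO SYSADM.TMP_FATOR (SETID, COMPANYID, FATOR)
SELECT T1.SETID,
       T1.COMPANYID,
       SYSADM.pkg_ajusta_kenan.fnc_fator_dias_desconto(T1.SETID, T1.COMPANYID)
  FROM SYSADM.PS_LOC_ITEM_SN T1
   LOG ERRORS REJECT LIMIT UNLIMITED;  -- duplicates land in ERR$_TMP_FATOR
```

Rows rejected by the unique index can afterwards be inspected in ERR$_TMP_FATOR rather than silently lost.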
The deterministic keyword is just part of the declaration, whether you are declaring a standalone function or a packaged function.
SCOTT @ nx102 Local> create package test_pkg
2 as
3 function determin_foo( p_arg in number )
4 return number
5 deterministic;
6 end;
7 /
Package created.
Elapsed: 00:00:00.34
1 create or replace package body test_pkg
2 as
3 function determin_foo( p_arg in number )
4 return number
5 deterministic
6 is
7 begin
8 return p_arg - 1;
9 end;
10* end;
SCOTT @ nx102 Local> /
Package body created.
Elapsed: 00:00:00.14
Justin
Can I have other procedures and functions inside the package?
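Yes — a package spec can declare any number of procedures and functions alongside the deterministic one. A minimal sketch extending the example above (the second subprogram's name is invented for illustration):

```sql
CREATE OR REPLACE PACKAGE test_pkg
AS
  FUNCTION determin_foo(p_arg IN NUMBER) RETURN NUMBER DETERMINISTIC;
  PROCEDURE print_foo(p_arg IN NUMBER);  -- hypothetical second subprogram
END;
/
CREATE OR REPLACE PACKAGE BODY test_pkg
AS
  FUNCTION determin_foo(p_arg IN NUMBER) RETURN NUMBER DETERMINISTIC
  IS
  BEGIN
    RETURN p_arg - 1;
  END;

  PROCEDURE print_foo(p_arg IN NUMBER)
  IS
  BEGIN
    -- package members can call each other directly
    DBMS_OUTPUT.PUT_LINE(determin_foo(p_arg));
  END;
END;
/
```

Only subprograms declared in the spec are callable from outside the package; anything declared solely in the body stays private.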
Similar Messages
-
Constantly inserting into large table with unique index... Guidance?
Hello all;
So here is my world. We have central to our data monitoring system an oracle database running Oracle Standard One (please don't laugh... I understand it is comical) licensing.
This DB is about 1.7 TB of small record data.
One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
Now, what we are observing about inserts into this table:
- Inserts are much slower with a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts across 10,000 sourceids (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
- Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle require roughly 10 GB of extra RAM every quarter to six months; we're already at about 50 GB of RAM just for Oracle.
- If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
We have the following assumption: partitioning this table by a good logical grouping of sourceid, and then timestamp, will reduce the work Oracle must do to verify uniqueness, reduce the amount of data Oracle must cache, and let us handle the "older than 3 months" purge at the partition level, greatly reducing table and index fragmentation.
Based on our hardware, it's going to be about a million-dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently we pay a whopping 5 grand a year, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
What I am looking for guidance on: should we really expect partitioning to make a difference here? I want to get back the 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's growing need for 10 GB more buffer cache per quarter (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
Alright all, thank you very much for listening, and I look forward to hearing the opinions of the experts.
Hello,
Here is a link to a blog article that will give you the right questions and answers which apply to your case:
http://jonathanlewis.wordpress.com/?s=delete+90%25
Since you are deleting 80% of your data (old data) based on a timestamp, don't even think about using the direct-path insert /*+ APPEND */ suggested by one of the contributors to this thread: a direct-path load will not re-use any of the free space left behind by the deletes. You have two indexes:
(a) unique index (sourceid, timestamp)
(b) index(create time)
Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left-hand side of the index; you end up with what we call a right-hand index. In other words, the scattering of index keys per leaf block is probably catastrophic (there is an Oracle internal function named sys_op_lbid that lets you verify this index information). There is a fair chance that your two indexes would benefit from a coalesce, as already suggested:
ALTER INDEX indexname COALESCE;
This coalesce should be considered as a regular task (maybe after each 80% delete). You also seem to have several sourceid values per timestamp; if so, you should think about compressing this index:
CREATE INDEX indexname ON tablename (sourceid, timestamp) COMPRESS;
or
ALTER INDEX indexname REBUILD COMPRESS;
You only need to do this once. Your index will be smaller and may be more efficient than it currently is. Index compression adds extra CPU work during an insert, but it might help improve the overall insert process.
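If the partitioning route is eventually taken despite the licensing cost, the asker's grouping idea could look something like the following 11g-style sketch (all table and column names here are invented; the 20% long-retention rows would have to be copied elsewhere before their partition is dropped):

```sql
-- Range-partition by timestamp so aged data is dropped per partition
-- instead of deleted row by row; hash-subpartition by sourceid to
-- spread unique-key maintenance across smaller index segments.
CREATE TABLE raw_data (
  sourceid    NUMBER        NOT NULL,
  ts          TIMESTAMP     NOT NULL,
  create_time TIMESTAMP     NOT NULL,
  reading     NUMBER,
  note        VARCHAR2(50)
)
PARTITION BY RANGE (ts) INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
SUBPARTITION BY HASH (sourceid) SUBPARTITIONS 8
(PARTITION p0 VALUES LESS THAN (TIMESTAMP '2014-01-01 00:00:00'));

-- A LOCAL unique index must contain the partitioning columns, so the
-- key order becomes (ts, sourceid); uniqueness checks then touch only
-- one partition's index segment.
CREATE UNIQUE INDEX raw_data_uk ON raw_data (ts, sourceid) LOCAL;

-- Aging out becomes a near-instant metadata operation instead of a
-- multi-hour mass delete:
ALTER TABLE raw_data DROP PARTITION FOR (TIMESTAMP '2013-06-15 00:00:00');
```

The create_time index would remain a separate (global or local) index and deserves its own analysis, since a local index on a non-partition-key column can slow lookups that don't prune partitions.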
Best Regards
Mohamed Houri -
Insert with unique index slow in 10g
Hi,
We are experiencing very slow response when a dup key is inserted into a table with unique index under 10g. the scenario can be demonstrated in sqlplus with 'timing on':
CREATE TABLE yyy (Col_1 VARCHAR2(5 BYTE) NOT NULL, Col_2 VARCHAR2(10 BYTE) NOT NULL);
CREATE UNIQUE INDEX yyy on yyy(col_1,col_2);
insert into yyy values ('1','1');
insert into yyy values ('1','1');
The 2nd insert results in a "unique constraint" error, but under our 10g the response time is consistently in the range of 00:00:00.64, while the 1st insert took only 00:00:00.01. BTW, with no index or a non-unique index you can insert many times and every insert returns fast. Under our 9.2 DB the response time is always under 00:00:00.01 with no index, a unique index, or a non-unique index.
We are on AIX 5.3 & 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production.
Has anybody seen this scenario?
Thanks,
David
It seems that in 10g Oracle is simply doing something more.
I used your example and ran the following script on 9.2 and 10.2. The hardware is the same, i.e. these are two instances on the same box.
begin
for i in 1..10000 loop
begin
insert into yyy values ('1','1');
exception when others then null;
end;
end loop;
end;
/
On 10g it took 01:15.08 and on 9i 00:47.06.
Running a trace showed a difference between 9i and 10g in the plan of the following recursive SQL:
9i plan:
select c.name, u.name
from
con$ c, cdef$ cd, user$ u where c.con# = cd.con# and cd.enabled = :1 and
c.owner# = u.user#
call count cpu elapsed disk query current rows
Parse 10000 0.43 0.43 0 0 0 0
Execute 10000 1.09 1.07 0 0 0 0
Fetch 10000 0.23 0.19 0 20000 0 0
total 30000 1.76 1.70 0 20000 0 0
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 2)
Rows Row Source Operation
0 NESTED LOOPS
0 NESTED LOOPS
0 TABLE ACCESS BY INDEX ROWID CDEF$
0 INDEX RANGE SCAN I_CDEF4 (object id 53)
0 TABLE ACCESS BY INDEX ROWID CON$
0 INDEX UNIQUE SCAN I_CON2 (object id 49)
0 TABLE ACCESS CLUSTER USER$
0 INDEX UNIQUE SCAN I_USER# (object id 11)
10g plan:
select c.name, u.name
from
con$ c, cdef$ cd, user$ u where c.con# = cd.con# and cd.enabled = :1 and
c.owner# = u.user#
call count cpu elapsed disk query current rows
Parse 10000 0.21 0.20 0 0 0 0
Execute 10000 1.20 1.31 0 0 0 0
Fetch 10000 2.37 2.59 0 20000 0 0
total 30000 3.79 4.11 0 20000 0 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 2)
Rows Row Source Operation
0 HASH JOIN (cr=2 pr=0 pw=0 time=301 us)
0 NESTED LOOPS (cr=2 pr=0 pw=0 time=44 us)
0 TABLE ACCESS BY INDEX ROWID CDEF$ (cr=2 pr=0 pw=0 time=40 us)
0 INDEX RANGE SCAN I_CDEF4 (cr=2 pr=0 pw=0 time=27 us)(object id 53)
0 TABLE ACCESS BY INDEX ROWID CON$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_CON2 (cr=0 pr=0 pw=0 time=0 us)(object id 49)
0 TABLE ACCESS FULL USER$ (cr=0 pr=0 pw=0 time=0 us)
So in 10g it used a hash join instead of a nested-loops join, at least for this particular select. Probably time to gather stats on the SYS tables?
The difference in time wasn't so big, though (4.11 vs 1.70), so it doesn't explain all the time taken. But you can check whether you see a bigger difference on your system.
You can also download Tom Kyte's runstats_pkg and run it in both environments to compare whether some statistics or latches differ significantly.
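Gathering dictionary statistics, as floated above, is a one-liner on 10g and later (run from a suitably privileged account such as SYS or a DBA user):

```sql
-- Collect optimizer statistics for the SYS-owned dictionary tables
-- (CON$, CDEF$, USER$, ...) read by the recursive constraint query,
-- which may steer the optimizer back toward the nested-loops plan.
BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS;
END;
/
```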
Gints Plivna
http://www.gplivna.eu -
I have a unique index "IX_Tag_Processed" on the field "Tag_Name" of the table "Tag_Processed". I keep getting this constraint error:
Msg 2601, Level 14, State 1, Line 15
Cannot insert duplicate key row in object 'etag.Tag_Processed' with unique index 'IX_Tag_Processed'. The duplicate key value is (AZPS_TEMUWS0110BL4_CISO).
The statement has been terminated.
For this INSERT I have tried using tagstg.Tag_Name NOT IN, with the same result:
INSERT into [Forecast_Data_Repository].[etag].[Tag_Processed] (Tag_Name, Tag_Type,Start_Datetime, End_Datetime, Source_SC, Sink_SC, Source_CA, Sink_CA, Source, Sink, Load_dt, Energy_product_code_id)
SELECT DISTINCT (Tag_Name), Tag_Type,Start_Datetime, End_Datetime, Source_SC, Sink_SC, Source_CA, Sink_CA, Source, Sink, GETUTCDATE(), [Forecast_Data_Repository].rscalc.GetStubbedEngProductCodeFromStaging(tagstg.Tag_Name)
FROM [Forecast_Data_Repository].[etag].[Tag_Stg] tagstg
WHERE tagstg.Id BETWEEN @minTId AND @maxTId --AND
--tagstg.Tag_Name NOT IN (
-- SELECT DISTINCT tproc.Tag_Name from [Forecast_Data_Repository].[etag].[Tag_Processed] tproc
thank you in advance,
Greg Hanson
I have even tried a MERGE, with the same constraint error:
DECLARE @minTId bigint, @minTRId bigint, @minEId bigint
DECLARE @maxTId bigint, @maxTRId bigint, @maxEId bigint
DECLARE @errorCode int
DECLARE @ReturnCodeTypeIdName nvarchar(50)
SELECT @minTRId = Min(Id) FROM [etag].[Transmission_Stg]
SELECT @maxTRId = Max(Id) FROM [etag].[Transmission_Stg]
SELECT @minTId = Min(Id) FROM [etag].[Tag_Stg]
SELECT @maxTId = Max(Id) FROM [etag].[Tag_Stg]
DECLARE @MergeOutputTag TABLE
ActionType NVARCHAR(10),
InsertTagName NVARCHAR(50)
--UpdateTagName NVARCHAR(50)
--DeleteTagName NVARCHAR(50)
DECLARE @MergeOutputEnergy TABLE
ActionType NVARCHAR(10),
InsertTagId BIGINT
--UpdateTagName NVARCHAR(50)
--DeleteTagName NVARCHAR(50)
DECLARE @MergeOutputTransmission TABLE
ActionType NVARCHAR(10),
InsertTagId BIGINT
--UpdateTagName NVARCHAR(50)
--DeleteTagName NVARCHAR(50)
MERGE [Forecast_Data_Repository].[etag].[Tag_Processed] tagProc
USING [Forecast_Data_Repository].[etag].[Tag_Stg] tagStg
ON
tagProc.Tag_Name = tagStg.Tag_Name AND
tagProc.Tag_Type = tagStg.Tag_Type AND
tagProc.Start_Datetime = tagStg.Start_Datetime AND
tagProc.End_Datetime = tagStg.End_Datetime AND
tagProc.Source_SC = tagStg.Source_SC AND
tagProc.Source_CA = tagStg.Source_CA AND
tagProc.Sink_CA = tagStg.Sink_CA AND
tagProc.Source = tagStg.Source AND
tagProc.Sink = tagStg.Sink
WHEN MATCHED THEN
UPDATE
SET Tag_Name = tagStg.Tag_Name,
Tag_Type = tagStg.Tag_Type,
Start_DateTime = tagStg.Start_Datetime,
End_Datetime = tagStg.End_Datetime,
Source_SC = tagStg.Source_SC,
Sink_SC = tagStg.Sink_SC,
Source_CA = tagStg.Source_CA,
Sink_CA = tagStg.Sink_CA,
Source = tagStg.Source,
Sink = tagStg.Sink,
Load_dt = GETUTCDATE()
WHEN NOT MATCHED BY TARGET THEN
INSERT (Tag_Name, Tag_Type, Start_Datetime, End_Datetime, Source_SC, Sink_SC, Source_CA, Sink_CA, Source, Sink, Load_dt)
VALUES (tagStg.Tag_Name, tagStg.Tag_Type, tagStg.Start_Datetime, tagStg.End_Datetime, tagStg.Source_SC, tagStg.Sink_SC, tagStg.Source_CA, tagStg.Sink_CA, tagStg.Source, tagStg.Sink, GETUTCDATE())
OUTPUT
$action,
INSERTED.Tag_Name
--UPDATED.Tag_Name
INTO @MergeOutputTag;
SELECT * FROM @MergeOutputTag;
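One thing worth noting about both attempts above: DISTINCT and the MERGE ON clause compare whole rows, while the unique index IX_Tag_Processed is on Tag_Name alone, so two staging rows sharing a Tag_Name but differing in any other column both count as "new", and the second one violates the index. A hedged sketch that keeps one row per Tag_Name within the batch (the ORDER BY choice is arbitrary) and skips names already in the target:

```sql
;WITH src AS (
    SELECT ts.*,
           ROW_NUMBER() OVER (PARTITION BY ts.Tag_Name ORDER BY ts.Id) AS rn
    FROM [Forecast_Data_Repository].[etag].[Tag_Stg] ts
    WHERE ts.Id BETWEEN @minTId AND @maxTId
)
INSERT INTO [Forecast_Data_Repository].[etag].[Tag_Processed]
       (Tag_Name, Tag_Type, Start_Datetime, End_Datetime, Source_SC,
        Sink_SC, Source_CA, Sink_CA, Source, Sink, Load_dt)
SELECT s.Tag_Name, s.Tag_Type, s.Start_Datetime, s.End_Datetime, s.Source_SC,
       s.Sink_SC, s.Source_CA, s.Sink_CA, s.Source, s.Sink, GETUTCDATE()
FROM src s
WHERE s.rn = 1                      -- one row per Tag_Name within the batch
  AND NOT EXISTS (                  -- and not already in the target table
      SELECT 1
      FROM [Forecast_Data_Repository].[etag].[Tag_Processed] p
      WHERE p.Tag_Name = s.Tag_Name);
```

The same rn = 1 deduplication could be applied to the USING source of the MERGE instead, if the update branch is still wanted.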
Greg Hanson -
Cannot insert duplicate key row in object 'dbo.NavNodes' with unique index 'NavNodes_AltPK'
Hi there,
I have a problem and it is very urgent. The following INSERT command fails; the failure, shown below, reports a duplicate key row with unique index 'NavNodes_AltPK'.
INSERT INTO [NavNodes] ([SiteId], [WebId], [Eid], [EidParent], [NumChildren], [RankChild],[ElementType], [Url], [DocId], [Name],[NameResource], [DateLastModified], [NodeMetainfo], [NonNavPage], [NavSequence], [ChildOfSequence],[IsDocLib],[QueryString]) values
('268DE498-61D8-47DB-8A69-4B8EB8557A51', 'CF4CCC82-F00F-4731-8210-CE3FE3D1E324',1025 ,0 ,2 ,0 ,1 ,'', NULL, 'Quick launch','Quick launch',getdate() ,NULL ,1 ,1 ,0,0,NULL)
As far as I am aware, there are 5 fields in the unique index: SiteID, WebID, EId, EIdParent, RankChild.
- what are the values of EidParent and RankChild if the Eid is 1025?
- what are the values of EidParent and RankChild if the Eid is 1002?
Thanks much.
Hi,
What build of SharePoint are you running? The error is similar to:
http://blogs.msdn.com/b/joerg_sinemus/archive/2013/02/12/february-2013-sharepoint-2010-hotfix.aspx
Also, to check what values are duplicate, please execute the following query:
SELECT TOP (20) COUNT(nav.Eid) AS 'DuplicateCount', nav.DocId, ad.DirName, ad.LeafName
FROM NavNodes AS nav WITH (NOLOCK)
INNER JOIN AllDocs AS ad WITH (NOLOCK) ON nav.DocId = ad.Id
WHERE nav.EidParent = 1025 AND DocId IS NOT NULL
GROUP BY nav.DocId, ad.DirName, ad.LeafName
ORDER BY 'DuplicateCount' DESC
The following article explains which fields are unique identifiers and describes each field in the NavNodes table:
http://msdn.microsoft.com/en-us/library/dd585180(v=office.11).aspx
Hope it helps!
Thanks,
Avni Bhatt
If this helped you resolve your issue, please mark it Answered -
We have been seeing the following 'warnings' in the event log of our BizTalk machine since upgrading to BTS 2006. They seem to occur randomly 6 or 8 times per day.
Does anyone know what this means and what needs to be done to clear it up? we have only one BizTalk server which is running on only one machine.
I am new to BizTalk, so I am unable to find how many tracking host instances running for BizTalk server. Also, can you please let me know that we can configure only one instance for one server/machine?
Source: BAM EventBus Service
Event: 5
Warning Details: Execute batch error. Exception information: TDDS failed to batch execution of streams. SQLServer: bizprod, Database: BizTalkDTADb. Cannot insert duplicate key row in object 'dta_MessageFieldValues' with unique index 'IX_MessageFieldValues'. The statement has been terminated.
Other than ensuring that there is a separate, single tracking host instance, you're getting an error about duplicate keys, which implies that you're trying to create a BAM activity twice with the same data.
I suggest an in-depth examination of the BAM definition (TPE or API) associated with the orchestration. In TPE, ensure that the first binding you select is the "Instance Id" or "Message Id" before going ahead to map the ports or other fields.
Regards. -
When we try to deploy a wsp to SharePoint containing code to generate a quick launch menu, we get the following error messages when running the last Enable-SPFeature command in PowerShell. The same code works in the development environment, but when we deploy to a test server the following error occurs:
Add-SPSolution "C:\temp\ImpactSharePoint.wsp"
Install-SPSolution -Identity impactsharepoint.wsp -GACDeployment
Enable-SPFeature impactsharepoint_branding -url http://im-sp1/sites/impact/
Enable-SPFeature impactsharepoint_pages -url http://im-sp1/sites/impact/
From UlsViewer.exe:
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database 880i
High System.Data.SqlClient.SqlException (0x80131904): Cannot insert duplicate key row in object 'dbo.NavNodes' with unique index 'NavNodes_PK'. The duplicate key value is (6323df8a-5c57-4d3e-a477-09aa8b66100a, 7ae114df-9d52-4b08-affa-8c544cbc27b6,
1000). The statement has been terminated. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject
stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj,
Boolean& dataReady) at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior
runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior
cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior
behavior) at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, CommandBehavior behavior, SqlQueryData monitoringData, Boolean retryForDeadLock) ClientConnectionId:2bb4004c-aa75-470e-b11e-dbf1c476aaed
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database 880k
High at Microsoft.SharePoint.SPSqlClient.ExecuteQueryInternal(Boolean retryfordeadlock) at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean retryfordeadlock) at Microsoft.SharePoint.Library.SPRequestInternalClass.AddNavigationNode(String
bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified) at Microsoft.SharePoint.Library.SPRequestInternalClass.AddNavigationNode(String
bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified) at Microsoft.SharePoint.Library.SPRequest.AddNavigationNode(String
bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified) at Microsoft.SharePoint.Navigation.SPNavigationNode.AddInternal(Int32
iPreviousNodeId, Int32 iParentId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav) at Microsoft.SharePoint.Navigation.SPNavigationNodeCollection.AddInternal(SPNavigationNode node, Int32 iPreviousNodeId) at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.<ConfigureQuickLaunchBar>b__0()
at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass5.<RunWithElevatedPrivileges>b__3() at Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode) at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback
secureCode, Object param) at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated secureCode) at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.ConfigureQuickLaunchBar() at ImpactSharePoint.Features.Pages.PagesEventReceiver.FeatureActivated(SPFeatureReceiverProperties
properties) at Microsoft.SharePoint.SPFeature.DoActivationCallout(Boolean fActivate, Boolean fForce) at Microsoft.SharePoint.SPFeature.Activate(SPSite siteParent, SPWeb webParent, SPFeaturePropertyCollection props, SPFeatureActivateFlags
activateFlags, Boolean fForce) at Microsoft.SharePoint.SPFeatureCollection.AddInternal(SPFeatureDefinition featdef, Version version, SPFeaturePropertyCollection properties, SPFeatureActivateFlags activateFlags, Boolean force, Boolean fMarkOnly)
at Microsoft.SharePoint.SPFeature.ActivateDeactivateFeatureAtWeb(Boolean fActivate, Boolean fEnsure, Guid featid, SPFeatureDefinition featdef, String urlScope, String sProperties, Boolean fForce) at Microsoft.SharePoint.SPFeature.ActivateDeactivateFeatureAtScope(Boolean
fActivate, Guid featid, SPFeatureDefinition featdef, String urlScope, Boolean fForce) at Microsoft.SharePoint.PowerShell.SPCmdletEnableFeature.UpdateDataObject() at Microsoft.SharePoint.PowerShell.SPCmdlet.ProcessRecord()
at System.Management.Automation.CommandProcessor.ProcessRecord() at System.Management.Automation.CommandProcessorBase.DoExecute() at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object
input, Hashtable errorResults, Boolean enumerate) at System.Management.Automation.PipelineOps.InvokePipeline(Object input, Boolean ignoreInput, CommandParameterInternal[][] pipeElements, CommandBaseAst[] pipeElementAsts, CommandRedirection[][]
commandRedirections, FunctionContext funcContext) at System.Management.Automation.Interpreter.ActionCallInstruction`6.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame
frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.Interpreter.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.LightLambda.RunVoid1[T0](T0
arg0) at System.Management.Automation.DlrScriptCommandProcessor.RunClause(Action`1 clause, Object dollarUnderbar, Object inputToProcess) at System.Management.Automation.CommandProcessorBase.DoComplete() at System.Management.Automation.Internal.PipelineProcessor.DoCompleteCore(CommandProcessorBase
commandRequestingUpstreamCommandsToStop) at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate) at System.Management.Automation.Runspaces.LocalPipeline.InvokeHelper()
at System.Management.Automation.Runspaces.LocalPipeline.InvokeThreadProc() at System.Management.Automation.Runspaces.PipelineThread.WorkerProc() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext,
ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext
executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart()
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database 880j
High SqlError: 'Cannot insert duplicate key row in object 'dbo.NavNodes' with unique index 'NavNodes_PK'. The duplicate key value is (6323df8a-5c57-4d3e-a477-09aa8b66100a, 7ae114df-9d52-4b08-affa-8c544cbc27b6, 1000).'
Source: '.Net SqlClient Data Provider' Number: 2601 State: 1 Class: 14 Procedure: 'proc_NavStructAddNewNode' LineNumber: 92 Server: 'IMPACTCLUSTER\IMPACTDB'
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database 880j
High SqlError: 'The statement has been terminated.' Source: '.Net SqlClient Data Provider' Number: 3621 State: 0 Class: 0 Procedure: 'proc_NavStructAddNewNode' LineNumber: 92 Server: 'IMPACTCLUSTER\IMPACTDB'
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database tzku
High ConnectionString: 'Data Source=IMPACTCLUSTER\IMPACTDB;Initial Catalog=WSS_Content;Integrated Security=True;Enlist=False;Pooling=True;Min Pool Size=0;Max Pool Size=100;Connect Timeout=15;Application Name=SharePoint[powershell][1][WSS_Content]'
Partition: 6323df8a-5c57-4d3e-a477-09aa8b66100a ConnectionState: Closed ConnectionTimeout: 15
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database tzkv
High SqlCommand: 'BEGIN TRAN DECLARE @abort int SET @abort = 0 DECLARE @EidBase int,@EidHome int SET @EidBase = 0 SET @EidHome = NULL IF @abort = 0 BEGIN EXEC @abort = proc_NavStructAllocateEidBlockWebId @wssp0, @wssp1,
@wssp2, @wssp3, @EidBase OUTPUT SELECT @wssp4 = @EidBase, @wssp5 = @abort END IF @abort = 0 BEGIN EXEC @abort = proc_NavStructAddNewNodeByUrl '6323DF8A-5C57-4D3E-A477-09AA8B66100A','7AE114DF-9D52-4B08-AFFA-8C544CBC27B6',1,2072,-1,0,N'sites/impact/default.aspx',N'PersonSøk',N'PersonSøk',NULL,0,0,0,NULL,@EidBase,@EidHome
OUTPUT SELECT @wssp6 = @abort END IF @abort = 0 BEGIN EXEC proc_NavStructLogChangesAndUpdateSiteChangedTime @wssp7, @wssp8, NULL END IF @abort <> 0 BEGIN ROLLBACK TRAN END ELSE BEGIN COMMIT TRAN END IF @abort = 0 BEGIN EXEC proc_UpdateDiskUsed
'6323DF8A-5C57-4D3E-A477-09AA8B66100A' END ' CommandType: Text CommandTimeout: 0 Parameter: '@wssp0' Type: UniqueIdentifier Size: 0 Direction: Input Value: '6323df8a-5c57-4d3e-a477-09aa8b66100a' Parameter: '@wssp1'
Type: UniqueIdentifier Size: 0 Direction: Input Value: '7ae114df-9d52-4b08-affa-8c544cbc27b6' Parameter: '@wssp2' Type: Int Size: 0 Direction: Input Value: '1' Parameter: '@wssp3' Type: Int Size: 0 Direction: Input Value: '2072'
Parameter: '@wssp4' Type: Int Size: 0 Direction: Output Value: '2072' Parameter: '@wssp5' Type: Int Size: 0 Direction: Output Value: '0' Parameter: '@wssp6' Type: Int Size: 0 Direction: Output Value: '10006'
Parameter: '@wssp7' Type: UniqueIdentifier Size: 0 Direction: Input Value: '6323df8a-5c57-4d3e-a477-09aa8b66100a' Parameter: '@wssp8' Type: UniqueIdentifier Size: 0 Direction: Input Value: '7ae114df-9d52-4b08-affa-8c544cbc27b6'
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database aek90
High SecurityOnOperationCheck = True
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database d0d6
High System.Data.SqlClient.SqlException (0x80131904): Cannot insert duplicate key row in object 'dbo.NavNodes' with unique index 'NavNodes_PK'. The duplicate key value is (6323df8a-5c57-4d3e-a477-09aa8b66100a, 7ae114df-9d52-4b08-affa-8c544cbc27b6,
1000). The statement has been terminated. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject
stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj,
Boolean& dataReady) at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior
runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior
cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior
behavior) at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, CommandBehavior behavior, SqlQueryData monitoringData, Boolean retryForDeadLock) at Microsoft.SharePoint.SPSqlClient.ExecuteQueryInternal(Boolean
retryfordeadlock) at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean retryfordeadlock) ClientConnectionId:2bb4004c-aa75-470e-b11e-dbf1c476aaed
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database ad194
High ExecuteQuery failed with original error 0x80131904
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Database 8z23
Unexpected Unexpected query execution failure in navigation query, HResult -2146232060. Query text (if available): "BEGIN TRAN DECLARE @abort int SET @abort = 0 DECLARE @EidBase int,@EidHome int SET @EidBase
= 0 SET @EidHome = NULL IF @abort = 0 BEGIN EXEC @abort = proc_NavStructAllocateEidBlockWebId @wssp0, @wssp1, @wssp2, @wssp3, @EidBase OUTPUT SELECT @wssp4 = @EidBase, @wssp5 = @abort END IF @abort = 0 BEGIN EXEC @abort = proc_NavStructAddNewNodeByUrl '6323DF8A-5C57-4D3E-A477-09AA8B66100A','7AE114DF-9D52-4B08-AFFA-8C544CBC27B6',1,2072,-1,0,N'sites/impact/default.aspx',N'PersonSøk',N'PersonSøk',NULL,0,0,0,NULL,@EidBase,@EidHome
OUTPUT SELECT @wssp6 = @abort END IF @abort = 0 BEGIN EXEC proc_NavStructLogChangesAndUpdateSiteChangedTime @wssp7, @wssp8, NULL END IF @abort <> 0 BEGIN ROLLBACK TRAN END ELSE BEGIN COMMIT TRAN END IF @abort = 0 BEGIN EXEC proc_UpdateDiskUsed
'6323DF8A-5C57-4D3E-A477-09AA8B66100A' END "
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
General 8kh7
High <nativehr>0x8107140d</nativehr><nativestack></nativestack>An unexpected error occurred while manipulating the navigational structure of this Web.
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
General aix9j
High SPRequest.AddNavigationNode: UserPrincipalName=i:0).w|s-1-5-21-2030300366-1823906440-2562684930-2106, AppPrincipalName= ,bstrUrl=http://im-sp1/sites/impact ,bstrName=PersonSøk ,bstrNameResource=<null> ,bstrNodeUrl=/sites/impact/default.aspx
,lType=0 ,lParentId=2072 ,lPreviousSiblingId=-1 ,bAddToQuickLaunch=False ,bAddToSearchNav=False
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
General ai1wu
Medium System.Runtime.InteropServices.COMException: <nativehr>0x8107140d</nativehr><nativestack></nativestack>An unexpected error occurred while manipulating the navigational structure of this
Web., StackTrace: at Microsoft.SharePoint.Navigation.SPNavigationNode.AddInternal(Int32 iPreviousNodeId, Int32 iParentId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav) at Microsoft.SharePoint.Navigation.SPNavigationNodeCollection.AddInternal(SPNavigationNode
node, Int32 iPreviousNodeId) at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.<ConfigureQuickLaunchBar>b__0() at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass5.<RunWithElevatedPrivileges>b__3()
at Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode) at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback secureCode, Object param) at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated
secureCode) at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.ConfigureQuickLaunchBar() at ImpactSharePoint.Features.Pages.PagesEventReceiver.FeatureActivated(SPFeatureReceiverProperties properties)
at Microsoft.SharePoint.SPFeature.DoActivationCallout(Boolean fActivate, Boolean fForce) at Microsoft.SharePoint.SPFeature.Activate(SPSite siteParent, SPWeb webParent, SPFeaturePropertyCollection props, SPFeatureActivateFlags activateFlags, Boolean
fForce) at Microsoft.SharePoint.SPFeatureCollection.AddInternal(SPFeatureDefinition featdef, Version version, SPFeaturePropertyCollection properties, SPFeatureActivateFlags activateFlags, Boolean force, Boolean fMarkOnly) at Microsoft.SharePoint.SPFeature.ActivateDeactivateFeatureAtWeb(Boolean
fActivate, Boolean fEnsure, Guid featid, SPFeatureDefinition featdef, String urlScope, String sProperties, Boolean fForce) at Microsoft.SharePoint.SPFeature.ActivateDeactivateFeatureAtScope(Boolean fActivate, Guid featid, SPFeatureDefinition
featdef, String urlScope, Boolean fForce) at Microsoft.SharePoint.PowerShell.SPCmdletEnableFeature.UpdateDataObject() at Microsoft.SharePoint.PowerShell.SPCmdlet.ProcessRecord() at System.Management.Automation.CommandProcessor.ProcessRecord()
at System.Management.Automation.CommandProcessorBase.DoExecute() at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate) at System.Management.Automation.PipelineOps.InvokePipeline(Object
input, Boolean ignoreInput, CommandParameterInternal[][] pipeElements, CommandBaseAst[] pipeElementAsts, CommandRedirection[][] commandRedirections, FunctionContext funcContext) at System.Management.Automation.Interpreter.ActionCallInstruction`6.Run(InterpretedFrame
frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.Interpreter.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.LightLambda.RunVoid1[T0](T0 arg0) at System.Management.Automation.DlrScriptCommandProcessor.RunClause(Action`1
clause, Object dollarUnderbar, Object inputToProcess) at System.Management.Automation.CommandProcessorBase.DoComplete() at System.Management.Automation.Internal.PipelineProcessor.DoCompleteCore(CommandProcessorBase commandRequestingUpstreamCommandsToStop)
at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate) at System.Management.Automation.Runspaces.LocalPipeline.InvokeHelper()
at System.Management.Automation.Runspaces.LocalPipeline.InvokeThreadProc() at System.Management.Automation.Runspaces.PipelineThread.WorkerProc() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext,
ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext
executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart()
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
03.12.2014 15:21:25.45 PowerShell.exe (0x1620)
0x11F4 SharePoint Foundation
Feature Infrastructure 88jm
High Feature receiver assembly 'ImpactSharePoint, Version=1.1.0.0, Culture=neutral, PublicKeyToken=3f4d824fecc0071e', class 'ImpactSharePoint.Features.Pages.PagesEventReceiver', method 'FeatureActivated' for feature
'd8aabd95-076a-4650-a8a6-0aa5bd8ac8d1' threw an exception: Microsoft.SharePoint.SPException: An unexpected error occurred while manipulating the navigational structure of this Web. ---> System.Runtime.InteropServices.COMException: <nativehr>0x8107140d</nativehr><nativestack></nativestack>An
unexpected error occurred while manipulating the navigational structure of this Web. at Microsoft.SharePoint.Library.SPRequestInternalClass.AddNavigationNode(String bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32
lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified) at Microsoft.SharePoint.Library.SPRequest.AddNavigationNode(String bstrUrl, String bstrName, String bstrNameResource,
String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav, String& pbstrDateModified) --- End of inner exception stack trace --- at Microsoft.SharePoint.SPGlobal.HandleComException(COMException
comEx) at Microsoft.SharePoint.Library.SPRequest.AddNavigationNode(String bstrUrl, String bstrName, String bstrNameResource, String bstrNodeUrl, Int32 lType, Int32 lParentId, Int32 lPreviousSiblingId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav,
String& pbstrDateModified) at Microsoft.SharePoint.Navigation.SPNavigationNode.AddInternal(Int32 iPreviousNodeId, Int32 iParentId, Boolean bAddToQuickLaunch, Boolean bAddToSearchNav) at Microsoft.SharePoint.Navigation.SPNavigationNodeCollection.AddInternal(SPNavigationNode
node, Int32 iPreviousNodeId) at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.<ConfigureQuickLaunchBar>b__0() at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass5.<RunWithElevatedPrivileges>b__3()
at Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode) at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback secureCode, Object param) at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated
secureCode) at ImpactSharePoint.ConfigureSharePointInstance.NavigationConfig.ConfigureQuickLaunchBar() at ImpactSharePoint.Features.Pages.PagesEventReceiver.FeatureActivated(SPFeatureReceiverProperties properties)
at Microsoft.SharePoint.SPFeature.DoActivationCallout(Boolean fActivate, Boolean fForce)
5b7e05f7-49df-42ca-b7c9-8ae5b06b464f
The code:
using System.Diagnostics;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Navigation;
namespace ImpactSharePoint.ConfigureSharePointInstance
{
    public class NavigationConfig
    {
        public void ConfigureQuickLaunchBar(SPWeb web)
        {
            SPSecurity.RunWithElevatedPrivileges(delegate
            {
                // Delete all links
                web.AllowUnsafeUpdates = true;
                for (int i = web.Navigation.QuickLaunch.Count - 1; i > -1; i--)
                    web.Navigation.QuickLaunch[i].Delete(); // deleting all the links in quick launch
                web.QuickLaunchEnabled = true;
                EventLog.WriteEntry("Sharepointfeature", "Starter");
                // Adding links
                SPNavigationNodeCollection nodes = web.Navigation.QuickLaunch;
                var sokNode = new SPNavigationNode("Søk", null, false);
                nodes.AddAsFirst(sokNode);
                sokNode.Update();
                EventLog.WriteEntry("Sharepointfeature", "Lagt til søk");
                // Personsøk (person search)
                var personSokNode = new SPNavigationNode("Deltakere", "/sites/impact/default.aspx", false); // TODO: fix hardcoding
                sokNode.Children.AddAsFirst(personSokNode);
                personSokNode.Update();
                EventLog.WriteEntry("Sharepointfeature", "Lagt til node personsøk");
                // UDB-søk (UDB search)
                var udbSokNode = new SPNavigationNode("UDB-Søk", "/sites/impact/udbsok.aspx", false); // TODO: fix hardcoding
                sokNode.Children.AddAsLast(udbSokNode);
                udbSokNode.Update();
                EventLog.WriteEntry("Sharepointfeature", "Lagt til node udbsøk");
                // Kommuner (municipalities)
                var kommuneNode = new SPNavigationNode("Kommuner", "/sites/impact/kommunesearch.aspx", false); // TODO: fix hardcoding
                sokNode.Children.AddAsFirst(kommuneNode);
                kommuneNode.Update();
                EventLog.WriteEntry("Sharepointfeature", "Lagt til node kommunesøk");
                // Organisasjoner (organisations)
                var virksomhetNode = new SPNavigationNode("Organisasjoner", "/sites/impact/virksomhetsearch.aspx", false); // TODO: fix hardcoding
                sokNode.Children.AddAsFirst(virksomhetNode);
                virksomhetNode.Update();
                EventLog.WriteEntry("Sharepointfeature", "Lagt til node kommunesøk");
                // NIR
                var nirNode = new SPNavigationNode("Nir", null, false); // TODO: fix hardcoding
                nodes.AddAsLast(nirNode);
                nirNode.Update();
                EventLog.WriteEntry("Sharepointfeature", "Lagt til node Nir");
                // Tilskudd (grants)
                var tilskuddNode = new SPNavigationNode("Tilskudd", "/sites/impact/", false);
                nodes.AddAsLast(tilskuddNode);
                tilskuddNode.Update();
                EventLog.WriteEntry("Sharepointfeature", "Lagt til node Tilskudd");
                // Kjøringsoversikt (run overview)
                var showkjoringerNode = new SPNavigationNode("Kjøringsoversikt", "/sites/dev/kjoringoversikt.aspx", false); // TODO: fix hardcoding
                tilskuddNode.Children.AddAsFirst(showkjoringerNode);
                showkjoringerNode.Update();
                EventLog.WriteEntry("Sharepointfeature", "Lagt til node showkjoringerNode");
                web.Update();
                EventLog.WriteEntry("Sharepointfeature", "Etter update");
                // Setting homepage
                SPFolder folder = web.RootFolder;
                folder.WelcomePage = "default.aspx";
                folder.Update();
                EventLog.WriteEntry("Sharepointfeature", "Etter homepage");
                web.AllowUnsafeUpdates = false;
            });
        }
    }
}
I think you need to debug your code; it looks like the values you are trying to insert already exist in the database.
Check these two IDs (6323df8a-5c57-4d3e-a477-09aa8b66100a, 7ae114df-9d52-4b08-affa-8c544cbc27b6).
I would try to run the select command against the content DB:
SELECT * FROM [DB Name].[dbo].[NavNodes] WHERE id = '6323df8a-5c57-4d3e-a477-09aa8b66100a'
I am using SQL server 2008 R1 SP3. And when we are doing back up operations we are facing the below error
Msg 2601, Level 14, State 1, Procedure sp_flush_commit_table, Line 15
Cannot insert duplicate key row in object 'sys.syscommittab' with unique index 'si_xdes_id'. The
duplicate key value is (2238926153).
The statement has been terminated.
Please assist me with your inputs.
Thanks,
Rakesh.
Hello,
Did you enable change tracking on the database? If so, please try to disable and re-enable the change tracking.
The following thread is about the similar issue, please refer to:
http://social.msdn.microsoft.com/forums/sqlserver/en-US/c2294c73-4fdf-46e9-be97-8fade702e331/backup-fails-after-installing-sql2012-sp1-cu1-build-3321
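The suggested disable/re-enable step can be sketched as follows; this is only an illustration (the database name MyDB and the retention values are placeholders — check your current change-tracking settings before re-enabling):

```sql
-- Sketch: turn change tracking off and back on for a database.
-- MyDB, the retention period, and AUTO_CLEANUP are placeholder values.
ALTER DATABASE MyDB SET CHANGE_TRACKING = OFF;
ALTER DATABASE MyDB SET CHANGE_TRACKING = ON
  (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
```

Note that disabling change tracking discards the tracked change history, so any consumers relying on it must re-initialize.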
Regards,
Fanny Liu
TechNet Community Support -
Hi All
I am getting the below error when modifying the navigation in one of the SharePoint 2010 site.
Error
An unexpected error occurred while manipulating the navigational structure of this Web.
Troubleshoot issues with Microsoft SharePoint Foundation.
Correlation ID: b9cb4f2e-cd06-4d77-b999-272a881a2905
The SP log:
System.Data.SqlClient.SqlException: Cannot insert duplicate key row in object 'dbo.NavNodes' with unique index 'NavNodes_AltPK'. The duplicate key value is (536677da-c0aa-41c8-991a-9ccf01d84b29, 4d9a2738-5c1d-4ad0-fcaf-9179e4230fg0, 1025,
13). The statement has been terminated. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject
stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Dat... b9cb4f2e-cd06-4d77-b999-272a881a2905
...a.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean
returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior
cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior
behavior) at Microsoft.SharePoint.Utilities.SqlSe... b9cb4f2e-cd06-4d77-b999-272a881a2905
Can someone help me.
MercuryMan
What build of SharePoint are you running? The error is similar to:
http://blogs.msdn.com/b/joerg_sinemus/archive/2013/02/12/february-2013-sharepoint-2010-hotfix.aspx
Trevor Seward, MCC -
Select count from large fact tables with bitmap indexes on them
Hi..
I have several large fact tables with bitmap indexes on them. When I do a SELECT COUNT(*) on one of these tables, I get a different result than when I do a SELECT COUNT(*), column_one FROM the table GROUP BY column_one. I don't have any null values in these columns. Is there a patch or a one-off that can rectify this?
Thx
You may have corruption in the index if the queries ...
Select /*+ full(t) */ count(*) from my_table t
... and ...
Select /*+ index_combine(t my_index) */ count(*) from my_table t;
... give different results.
Look at metalink for patches, and in the meantime drop-and-recreate the indexes or make them unusable then rebuild them. -
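The "make them unusable then rebuild them" route can be sketched like this (the index name MY_INDEX is a placeholder, not from the thread):

```sql
-- Sketch: mark the suspect bitmap index unusable, then rebuild it.
-- MY_INDEX is a hypothetical name; substitute your real index.
ALTER INDEX my_index UNUSABLE;
ALTER INDEX my_index REBUILD;
```

Depending on the SKIP_UNUSABLE_INDEXES setting, queries either ignore the unusable index or raise an error against it, so schedule the rebuild promptly.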
How to insert a table with variable rows in smart form
Hi all,
How to insert a table with variable rows in smart form?
Any help would be appreciated.
Regards,
Mahesh.
Hi,
Right click the mouse->create->table
If you want 5 columns, you need to declare 5 cells in one line type of the table
Click on Table -> Details, then do the following:
Line Type   1     2     3    4    5
L1          2mm   3mm   etc.
Here specify the widths of as many columns as you want.
Then in the header/main area of the table, click Create Table Line with row type L1; five cells will appear automatically. In each cell create a text element and display the variable to be printed there.
ORA-00604 ORA-00904 When query partitioned table with partitioned indexes
Got ORA-00604 ORA-00904 When query partitioned table with partitioned indexes in the data warehouse environment.
Query runs fine when query the partitioned table without partitioned indexes.
Here is the query.
SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
al27.accessory_code
FROM vlc.veh_vdc_accessorization_fact al1,
vlc.vdc_dim al2,
vlc.model_attribute_dim al7,
vlc.ppo_list_dim al18,
vlc.ppo_list_indiv_type_dim al23,
vlc.accy_type_dim al27
WHERE ( al2.vdc_id = al1.vdc_location_id
AND al7.model_attribute_id = al1.model_attribute_id
AND al18.mydppolist_id = al1.ppo_list_id
AND al23.mydppolist_id = al18.mydppolist_id
AND al23.mydaccytyp_id = al27.mydaccytyp_id
AND ( al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
AND al2.vdc_name IN
('PORT OF BALTIMORE',
'PORT OF JACKSONVILLE - LEXUS',
'PORT OF LONG BEACH',
'PORT OF NEWARK',
'PORT OF PORTLAND')
AND al27.accessory_code IN ('42', '43', '44', '45')))
GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code
I would recommend that you post this at the following OTN forum:
Database - General
General Database Discussions
and perhaps at:
Oracle Warehouse Builder
Warehouse Builder
The Oracle OLAP forum typically does not cover general data warehousing topics. -
Hi,
I am getting following warnings in db02:
Tables without unique index
STATS_RFC
STATS_RFC_OLD.
In SE16 the status shows that table STATS_RFC_OLD \ STATS_RFC is not active in the Dictionary. In SE11 the tables do not exist.
Kindly suggest.
Regards,
Rahul.
Hi,
desc SAPR3P.STATS_RFC
Name Null? Type
STATID VARCHAR2(30)
TYPE CHAR(1)
VERSION NUMBER
FLAGS NUMBER
C1 VARCHAR2(30)
C2 VARCHAR2(30)
C3 VARCHAR2(30)
C4 VARCHAR2(30)
C5 VARCHAR2(30)
N1 NUMBER
N2 NUMBER
N3 NUMBER
N4 NUMBER
N5 NUMBER
N6 NUMBER
N7 NUMBER
N8 NUMBER
N9 NUMBER
N10 NUMBER
N11 NUMBER
N12 NUMBER
D1 DATE
R1 RAW(32)
R2 RAW(32)
CH1 VARCHAR2(1000)
No fields in STATS_RFC_OLD.
Regards,
Rahul. -
How to optimize massive insert on a table with spatial index ?
Hello,
I need to implement a load process for saving up to 20 000 points per minutes in Oracle 10G R2.
These points represents car locations tracked by GPS and I need to store at least all position from the past 12 hours.
My problem is that the spatial index is very costly during insert (for the moment I only do inserts).
My several attempts at the insertion, via:
- Java and PreparedStatement.executeBatch
- Java and generating a SQL*Loader file
- Java and insertion on a view with an "instead of" trigger
all give me the same (not so good) results.
For the moment, I work in DROP INDEX, INSERT, CREATE INDEX phases.
But is there a way to just DISABLE the index and then REBUILD it only for the inserted rows?
I used the APPEND option for insertion :
INSERT /*+ APPEND */ INTO MY_TABLE (ID, LOCATION) VALUES (?, MDSYS.SDO_GEOMETRY(2001,NULL,MDSYS.SDO_POINT_TYPE(?, ?, NULL), NULL, NULL))
My spatial index is created with the following options :
'sdo_indx_dims=2,layer_gtype=point'
Is there a way to optimize this heavy load?
What about the PARALLEL option, and how does it work? (Not so clear to me from the documentation... I am not a DBA.)
Thanks in advance
It is possible to insert + commit 20 000 points in 16 seconds.
select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
drop table testpoints;
create table testpoints
( point mdsys.sdo_geometry);
delete user_sdo_geom_metadata
where table_name = 'TESTPOINTS'
and column_name = 'POINT';
insert into user_sdo_geom_metadata values
('TESTPOINTS'
,'POINT'
,sdo_dim_array(sdo_dim_element('X',0,1000,0.01),sdo_dim_element('Y',0,1000,0.01))
,null)
create index testpoints_i on testpoints (point)
indextype is mdsys.spatial_index parameters ('sdo_indx_dims=2,layer_gtype=point');
insert /*+ append */ into testpoints
select (sdo_geometry(2001,null,sdo_point_type(1+ rownum / 20, 1 + rownum / 50, null),null,null))
from all_objects where rownum < 20001;
Duration: 00:00:10.68 seconds
commit;
Duration: 00:00:04.96 seconds
select count(*) from testpoints;
COUNT(*)
20000
The insert of 20 000 rows takes 11 seconds, the commit takes 5 seconds.
In this example there is no data traffic between the Oracle database and a client, but you have 60 - 16 = 44 seconds left to upload your points into a temporary table. After uploading into a temporary table you can do:
insert /*+ append */ into testpoints
select (sdo_geometry(2001,null,sdo_point_type(x,y, null),null,null))
from temp_table;
commit;
Your INSERT ... VALUES approach is slow; do some bulk processing.
I think it can be done, my XP computer that runs my database isn't state of the art. -
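As a sketch of the "bulk processing" suggested above, assuming the points have been uploaded into a staging table temp_table(x, y) as described earlier in the reply (the table name and the batch size of 5000 are illustrative choices, not from the thread):

```sql
-- Sketch: batch the staged points with BULK COLLECT and insert them
-- with FORALL instead of row-by-row INSERT ... VALUES.
DECLARE
  TYPE t_num IS TABLE OF NUMBER;
  l_x t_num;
  l_y t_num;
  CURSOR c IS SELECT x, y FROM temp_table;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_x, l_y LIMIT 5000;  -- arbitrary batch size
    EXIT WHEN l_x.COUNT = 0;
    FORALL i IN 1 .. l_x.COUNT
      INSERT INTO testpoints (point)
      VALUES (sdo_geometry(2001, NULL,
                sdo_point_type(l_x(i), l_y(i), NULL),
                NULL, NULL));
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```

Scalar collections are used rather than a collection of records because, on the 10gR2 release discussed here, FORALL cannot reference individual record fields.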
Insertion in Table with Virtual Column
Hi,
I am using 11.1.0.7.0 on Solaris 10.
I created following table:
test@mytest> create table mytest (c1 number, c2 number generated always as (1) virtual);
Table created.
test@mytest> create unique index idx on mytest(c2);
Index created.
test@mytest> insert into mytest values(1);
insert into mytest values(1)
ERROR at line 1:
ORA-00947: not enough values
test@mytest>
Why is it not letting me insert into the table? Is it because we cannot insert a value into a virtual column?
regards
Edited by: Panicked DBA on Aug 28, 2010 3:59 AM
This works a little bit better but not really as expected:
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL>
SQL> drop table mytest purge;
Table dropped.
SQL>
SQL> create table mytest (c1 number, c2 number generated always as (1) virtual);
Table created.
SQL> create unique index idx on mytest(c2);
Index created.
SQL> insert into mytest(c1) values(1);
1 row created.
SQL> commit;
Commit complete.
SQL> set null IS_NULL
SQL> select c1, c2 from mytest ;
C1 C2
1
It looks like there is a bug if you specify the C2 NUMBER data type:
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL>
SQL> drop table mytestok purge;
Table dropped.
SQL> drop table mytestko purge;
Table dropped.
SQL>
SQL> create table mytestok (c1 number, c2 generated always as (1) virtual);
Table created.
SQL> insert into mytestok(c1) values(1);
1 row created.
SQL> commit;
Commit complete.
SQL> select c1, c2 from mytestok ;
C1 C2
1 1
SQL> select * from mytestok where c2 = 1;
C1 C2
1 1
SQL>
SQL> create table mytestko (c1 number, c2 number generated always as (1) virtual);
Table created.
SQL> insert into mytestko(c1) values(1);
1 row created.
SQL> commit;
Commit complete.
SQL> set null IS_NULL
SQL> select c1, c2 from mytestko ;
C1 C2
1
SQL> select * from mytestko where c2 = 1;
no rows selected
SQL> exit