SQL Server Extended Events capturing DDL schema changes gives a "(No column name)" value on DDL events being performed.

Team,
I've created a session to capture object_created, object_altered and object_deleted events into a Ring Buffer target ("in-memory storage") on a server. Whenever I create, alter or drop an object, I see a result in SSMS with the column header "(No column name)" and the value "1".
Is there a way we can get rid of this?
Thanks in advance for your tip!
Rajkumar Yelugu

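A common cause of a "(No column name)" header in SSMS is simply a SELECT expression without an alias. As a hedged sketch (assuming the session name used later in this thread), reading the ring buffer with the expression explicitly aliased avoids it:

SELECT CAST(xet.target_data AS XML) AS target_data -- alias avoids "(No column name)"
FROM sys.dm_xe_session_targets AS xet
INNER JOIN sys.dm_xe_sessions AS xes
ON xes.address = xet.event_session_address
WHERE xes.name = 'Capture DDL Schema Changes';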

Similar Messages

  • Selective XML Index feature is not supported for the current database version, SQL Server Extended Events, optimizing reading from XML column datatype

    Team, thanks for looking into this.
    As a last resort to optimize my stored procedure (below) I wanted to create a Selective XML Index (normal XML indexes don't seem to improve performance as needed), but I keep getting this error within my stored proc: "Selective XML
    Index feature is not supported for the current database version." However,
    EXECUTE sys.sp_db_selective_xml_index; returns 1, stating Selective XML Indexes are enabled on my current database.
    Is there ANY alternative way I can optimize the stored proc below?
    Thanks in advance for your response(s)!
    /****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    -- EXEC [dbo].[MN_Process_DDLSchema_Changes]
    ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
    AS
    BEGIN
    SET NOCOUNT ON -- Doesn't seem to have an impact (maybe it won't for SQL Server Extended Events sessions created on servers / DBs)
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    select getdate() as getdate_0
    DECLARE @XML XML , @Prev_Insertion_time DATETIME
    -- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
    SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
    -- PRINT '1'
    CREATE TABLE #Temp
    (
    EventName VARCHAR(100),
    Time_Stamp_EE DATETIME,
    ObjectName VARCHAR(100),
    ObjectType VARCHAR(100),
    DbName VARCHAR(100),
    ddl_Phase VARCHAR(50),
    ClientAppName VARCHAR(2000),
    ClientHostName VARCHAR(100),
    server_instance_name VARCHAR(100),
    ServerPrincipalName VARCHAR(100),
    nt_username varchar(100),
    SqlText NVARCHAR(MAX)
    )
    CREATE TABLE #XML_Hold
    (
    ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK needed for indexing on the XML column
    BufferXml XML
    )
    select getdate() as getdate_01
    INSERT INTO #XML_Hold (BufferXml)
    SELECT
    CAST(target_data AS XML) AS BufferXml -- Buffer storage from SQL Extended Event(s). Looks like there is a limitation on XML size ?? Needs more research.
    FROM sys.dm_xe_session_targets xet
    INNER JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
    WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session created within SQL Server Extended Events
    --RETURN
    --SELECT * FROM #XML_Hold
    select getdate() as getdate_1
    -- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
    CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
    FOR
    (
    PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
    )
    --RETURN
    --CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
    --SELECT GETDATE() AS GETDATE_2
    -- RYelugu 03/10/2015 - Creating a secondary XML index doesn't make a significant improvement at the Query Optimizer; instead, creation takes more time. Only a primary should be good here
    --CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - A primary XML index must already exist before a secondary can be created
    --USING XML INDEX [IX_XML_Hold]
    ---- FOR VALUE
    -- --FOR PROPERTY
    -- FOR PATH
    --SELECT GETDATE() AS GETDATE_3
    --PRINT '2'
    -- RETURN
    SELECT GETDATE() GETDATE_3
    INSERT INTO #Temp
    (
    EventName ,
    Time_Stamp_EE ,
    ObjectName ,
    ObjectType,
    DbName ,
    ddl_Phase ,
    ClientAppName ,
    ClientHostName,
    server_instance_name,
    nt_username,
    ServerPrincipalName ,
    SqlText
    )
    SELECT
    p.q.value('@name[1]','varchar(100)') AS eventname,
    p.q.value('@timestamp[1]','datetime') AS timestampvalue,
    p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
    p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
    p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
    p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
    p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
    p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
    p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
    p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
    p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
    p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
    FROM #XML_Hold
    CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
    WHERE -- Ryelugu 03/05/2015 - Perf optimize - filtering the buffered XML so as not to look up previously loaded records in the stage table
    p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
    AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every event records a Begin version and a Commit version into the buffer (XML); we need the committed version
    AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - SQL Server may internally create statistics for #Temp tables; we do not want CREATE STATISTICS statements to be logged
    AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- For any stored proc that creates a temp table within it, the Extended Event captures that creation statement as well; we don't need it
    AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records captured by Replication Monitor ??
    SELECT GETDATE() GETDATE_4
    -- SELECT * FROM #TEMP
    -- SELECT COUNT(*) FROM #TEMP
    -- SELECT GETDATE()
    -- RETURN
    -- PRINT '3'
    --RETURN
    INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
    (
    [UserName]
    ,[DbName]
    ,[ObjectName]
    ,[client_app_name]
    ,[ClientHostName]
    ,[ServerName]
    ,[SQL_TEXT]
    ,[EE_Time_Stamp]
    ,[Event_Name]
    )
    SELECT
    CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
    ELSE T.nt_username
    END
    ,T.DbName
    ,T.objectname
    ,T.clientappname
    ,t.ClientHostName
    ,T.server_instance_name
    ,T.sqltext
    ,T.Time_Stamp_EE
    ,T.eventname
    FROM
    #TEMP T
    /** -- RYelugu 03/06/2015 - Filters are now applied directly while retrieving records from the buffer or on the XML
    -- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
    WHERE ddl_Phase ='Commit'
    AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - SQL Server may internally create statistics for #Temp tables; we do not want CREATE STATISTICS statements to be logged
    AND ObjectName NOT LIKE '%#%' -- For any stored proc that creates a temp table within it, the Extended Event captures that creation statement as well; we don't need it
    AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance optimize
    AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
    WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu Server name needs to be added to the XML (events in session)
    AND MN.[DbName] = T.DbName
    AND MN.[Event_Name] = T.EventName
    AND MN.[ObjectName]= T.ObjectName
    AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
    AND MN.[SQL_TEXT] = T.SqlText ) -- Ryelugu 03/05/2015 This is a comparison metric as well, but need to decide on
    -- the performance factor here; will take advice from Lance on whether comparison on varchar(max) is a viable idea
    **/
    --SELECT GETDATE()
    --PRINT '4'
    --RETURN
    SELECT
    top 100
    [EE_Time_Stamp]
    ,[ServerName]
    ,[DbName]
    ,[Event_Name]
    ,[ObjectName]
    ,[UserName]
    ,[SQL_TEXT]
    ,[client_app_name]
    ,[Created_Date]
    ,[ClientHostName]
    FROM
    [dbo].[MN_DDLSchema_Changes_log]
    ORDER BY [EE_Time_Stamp] desc
    -- select getdate()
    -- ** DELETE EVENTS after logging into Physical table
    -- NEED TO identify if this @XML can be updated into a physical system table such that previously loaded events are left untouched
    -- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
    -- SELECT @XML
    SELECT GETDATE() GETDATE_5
    END
    GO
    Rajkumar Yelugu

    @@Version :
    Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
        May 14 2014 18:34:29
        Copyright (c) Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    (1 row(s) affected)
    The compatibility level is set to 110.
    One of the documented limitations is "XML columns with a depth of more than 128 nested nodes".
    How do I verify this? Thanks.
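    As a hedged aside: the two prerequisites mentioned in this thread can be checked directly, and the ring-buffer XML here is only a few levels deep (/RingBufferTarget/event/data/value), so the 128-node depth limit is unlikely to be the blocker. Note too that the index above targets a temp table, which lives in tempdb, so tempdb's settings may be what the error is actually complaining about.
    -- Check the current database's compatibility level (Selective XML Indexes need 110+)
    SELECT name, compatibility_level FROM sys.databases WHERE name = DB_NAME();
    -- Returns 1 when Selective XML Indexes are enabled for the current database
    EXECUTE sys.sp_db_selective_xml_index;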
    Rajkumar Yelugu

  • Microsoft sql server extended event log file

    Dears,
    Sorry for the questions below if they are very beginner-level.
    In my implementation I have clustered SQL 2012 on Windows 2012; I am using mount points since I have many clustered disks.
    My mount point size is only 3 GB; my Extended Event logs are growing fast and they are stored directly on the mount point drive (path: F:\MSSQL11.MSSQLSERVER\MSSQL\Log).
    What is the best practice for working with them? (Is it to keep all Extended Events? Or recirculate? Or shrink? Or store them in a DB?)
    Is there any relation between SQL truncation and limiting the size of the Extended Event logs?
    How can I recirculate these Extended Events?
    How can I change the default path?
    How can I stop it?
    And in case I stop it, does this mean SQL events stop being stored in the Windows Event Viewer?
    Thank you

    After a lot of checking, I have found the below:
    My case:
    I have SQL Failover Cluster Instances "FCI" and I am using mount points to store my instances.
    I have 2 passive copies for each FCI.
    In my configuration I chose to store the root instance, which includes the logs, on a mount point.
    My mount point is 2 GB only, which became full after a few days of deployment.
    Light technical information:
    The Extended Event log files are generated because I have an FCI; in a single (non-clustered) SQL installation you will not find these files.
    The maximum file size is 100 MB.
    The files start circulating once there are 10 full files.
    If you have the FCI installed as 1 active and 2 passive, and you are failing over between the nodes, then you can expect to see around 14-30 copies of this file.
    Based on the above information you will need around 100 MB * 10 files per instance copy * 3 (since in my case I have 1 active and 2 passive instances), which = 3000 MB.
    So in my case my mount point was 2 GB, which became full because of these SQLDIAG logs.
    Solution:
    I extended my mount point by 3 GB because I am storing these logs on it.
    In case you need to change the SQLDIAG extended log size to 50 MB, for example, and the path to F:\Logs, then you will need the commands below:
    ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG OFF;
    ALTER SERVER CONFIGURATION
    SET DIAGNOSTICS LOG MAX_SIZE = 50 MB;
    ALTER SERVER CONFIGURATION
    SET DIAGNOSTICS LOG PATH = 'F:\logs';
    ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG ON;
    After that you will need to restart the FCI from SQL Server Configuration Manager or Failover Cluster Manager.
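    To verify the settings took effect, something like the following should work (a sketch; sys.dm_os_server_diagnostics_log_configurations is the DMV that exposes the current diagnostics log settings on 2012 and later):
    -- Current failover cluster diagnostics (SQLDIAG) log settings
    SELECT is_enabled, [path], max_size, max_files
    FROM sys.dm_os_server_diagnostics_log_configurations;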
    I hope you find this information helpful if this is your case.
    Regards

  • OGG for SQL Server - Extract stops capturing - Bug?

    Hi, all,
    I've found a problem with OGG for SQL Server where the Extract stops capturing data after the transaction log is backed up. I've looked for ways to reconfigure OGG to avoid the problem but couldn't find any reference to options to workaround this problem. It seems to be a bug to me.
    My Extract configuration is as follows:
    EXTRACT ext1
    SOURCEDB mssql1
    TRANLOGOPTIONS NOMANAGESECONDARYTRUNCATIONPOINT
    EOFDELAY 60
    EXTTRAIL dirdat/e1
    TABLE dbo.TestTable;
    I'm using the EOFDELAY parameter for testing purposes only, since it's easy to reproduce the scenario that causes the issue when the extract polling is configured with longer intervals.
    When the transaction log backup runs, SQL Server marks all the virtual logs that are older than the primary and secondary truncation points as inactive (status = 0). These virtual logs can then be reused if required. They still contain change records, though, and OGG can read from them if required, before they are overwritten. This situation will never occur if we are not using SQL Replication and have the Extract configured with the parameter MANAGESECONDARYTRUNCATIONPOINT.
    However, I'm trying to simulate a scenario where OGG is used alongside SQL Replication and the Extract is configured with the NOMANAGESECONDARYTRUNCATIONPOINT option. The situation that I've reproduced, and that caused the Extract to stop capturing, is the following sequence of events:
    1. Extract reads the transaction log and captures changes up to LSN X
    2. More changes are made to the database and the LSN is incremented
    3. Log Reader reads the transaction log, captures changes up to LSN X+Y and advances the secondary truncation point to that LSN
    4. A transaction log backup occurs, backs up all the active virtual logs, advances the primary truncation point to an LSN greater than LSN X+Y, and marks all the virtual logs with LSNs <= X+Y as inactive (status = 0)
    5. Changes continue to happen in the database consuming all the available inactive virtual logs and overwriting them.
    6. The extract wakes up again to capture more changes.
    At this point, the changes between LSNs X and X+Y are not in the transaction log anymore, but are available in the backups. From what I understood in the documentation, the Extract should detect that situation and retrieve the changes from the transaction log backups. This, however, is not happening and the Extract becomes stuck. It still polls the transaction log at the configured interval, querying the log state with DBCC LOGINFO, but doesn't move forward anymore.
    If I stop and restart the Extract I can see from the trace that it does the right thing upon startup. It realises that it requires information that's missing from the logs, queries MSDB for the available backups, and mines the backups to get the required LSNs.
    I would've thought the Extract should do the same during normal operation, without the need for a restart.
    Is this a bug or the normal operation of the Extract? Is there a way to configure it to avoid this situation without using NOMANAGESECONDARYTRUNCATIONPOINT?
    The following is the state of the Extract once it gets stuck. The last replicated change occurred at 2012-07-09 12:46:50.370000. All the changes after that, and there are many, were not captured until I restarted the Extract.
    GGSCI> info extract ext1, showch
    EXTRACT EXT1 Last Started 2012-07-09 12:32 Status RUNNING
    Checkpoint Lag 00:00:00 (updated 00:00:54 ago)
    VAM Read Checkpoint 2012-07-09 12:46:50.370000
    LSN: 0x0000073d:00000aff:0001, Tran: 0000:000bd922
    Current Checkpoint Detail:
    Read Checkpoint #1
    VAM External Interface
    Startup Checkpoint (starting position in the data source):
    Timestamp: 2012-07-09 11:41:06.036666
    LSN: 0x00000460:00000198:0004, Tran: 0000:00089b02
    Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
    Timestamp: 2012-07-09 12:46:50.370000
    LSN: 0x0000073d:00000afd:0004, Tran: 0000:000bd921
    Current Checkpoint (position of last record read in the data source):
    Timestamp: 2012-07-09 12:46:50.370000
    LSN: 0x0000073d:00000aff:0001, Tran: 0000:000bd922
    Write Checkpoint #1
    GGS Log Trail
    Current Checkpoint (current write position):
    Sequence #: 14
    RBA: 28531192
    Timestamp: 2012-07-09 12:50:02.409000
    Extract Trail: dirdat/e1
    CSN state information:
    CRC: D2-B6-9F-B0
    CSN: Not available
    Header:
    Version = 2
    Record Source = A
    Type = 8
    # Input Checkpoints = 1
    # Output Checkpoints = 1
    File Information:
    Block Size = 2048
    Max Blocks = 100
    Record Length = 20480
    Current Offset = 0
    Configuration:
    Data Source = 5
    Transaction Integrity = 1
    Task Type = 0
    Status:
    Start Time = 2012-07-09 12:32:29
    Last Update Time = 2012-07-09 12:50:02
    Stop Status = A
    Last Result = 400
    Thanks!
    Andre

    It might be something simple (or maybe not); but the best/fastest way to troubleshoot this would be to have Oracle (GoldenGate) support review your configuration. There are a number of critical steps required to allow GG to interoperate with MS's capture API. (I doubt this is it, but is your TRANLOGOPTIONS on one line? It looks like you have it on two, the way it's formatted here.)
    Anyway, GG support has seen it all, and can probably wrap this up quickly. (And if it was something simple -- or even a bug -- do post back here & maybe someone else can benefit from the solution.)
    Perhaps someone else will be able to provide a better answer, but for the most part troubleshooting this (ie, sql server) via forum tends to be a bit like doing brain surgery blindfolded.

  • Procedure not selecting SQL Server database specified in Physical Schema

    Hi all,
    I'm still green with ODI and couldn't find an answer to this after searching through the forum. I have 10.1.3.6.2 installed right now.
    I have my Physical Architecture setup with Microsoft SQL Server, Essbase, and some others. The SQL Server connection is JDBC, using integrated security (I don't think integrated security is supported but I got it to work, although I had to hack up the 10.1.3.6 upgrade script to get the patch to work...)
    Under the Physical Architecture I have several data servers. Right now the only field I specify is the Database (Catalog) and a context -- everything else is standard (even Owner (Schema) is just <Undefined> right now but changing it doesn't seem to make a difference right now)
    So, I have my job all designed up in Designer and can run it, and everything works just fine, except one little snag. The database in SQL Server doesn't seem to get selected automatically (by virtue of being specified in the physical schema). However, if I put a step into the job so that it executes "USE database_name", then everything works.
    Normally, I could live with this, but the databases are named differently on different servers, so I'd prefer to just get it to work from the physical schema instead of jumping through some hoops with variables.
    Also, while I thought about specifying the database on the JDBC URL (databaseName=X), this won't work either (it's probably poor form anyway).
    So... shouldn't specifying the database/catalog in the physical schema make it so that it gets selected as default and then SQL queries are running against that?
    Thanks for any help/insight, as always,
    Jason

    Jason
    Using the database option in the URL is the right way to do it, the URL I use is:
    jdbc:sqlserver://myserver:1433;database=ODIM;selectMethod=cursor
    Craig

  • SQL Server Agent Jobs error for Slowly changing dimension

    Hi,
    I have implemented Slowly Changing Dimension in 5 of my packages for lookup insert/update.
    All the packages run fine in SSDT, and when I deployed the project to SSISDB and ran the packages, all ran successfully. But when I created a job out of that and ran the packages, 3 packages ran successfully and 2 packages failed.
    When I opened the All Executions report, I found the following error:
    Message
    Message Source Name
    Subcomponent Name
    Process Provider:Error: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80004005  Description:
    "Login timeout expired". An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80004005  Description: "A network-related or instance-specific error has occurred while establishing
    a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.". An OLE DB record is available. 
    Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80004005  Description: "Named Pipes Provider: Could not open a connection to SQL Server [53]. ".
    Process Provider
    Slowly Changing Dimension [212]
    Then I opened the Provider package in SSDT and changed the source record limit from 4,00,000 to 15,000 in the source query, deployed again and ran it; the job succeeded. More than 15,000 failed.
    And in the 2nd experiment, I removed the Slowly Changing Dimension task, implemented a normal lookup for insert/update, set the source limit back to 4,00,000, deployed again and ran it; the job succeeded.
    Now I am not able to figure out what exactly the problem is with the Slowly Changing Dimension task for more than 15,000 records in a SQL Server Agent job run.
    Can anybody please help me out.
    Thanks
    Bikram

    Hi Vikash,
    As I mentioned in the above post, there are 2 scenarios:
    "Then I opened the Provider package in SSDT and changed the source record limit from 4,00,000 to 15,000 in the source
    query, deployed again and ran it; the job succeeded. More than 15,000 failed.
    And in the 2nd experiment, I removed the Slowly Changing Dimension task, implemented a normal lookup for insert/update, set the source limit back to 4,00,000, deployed again and ran it; the job succeeded."
    That means I am able to connect to SQL Server.
    But if I change the 1st scenario and read 4,00,000 records, the job fails and shows the above-mentioned error.
    Similarly in the 2nd scenario, if I implement the SCD lookup, the job fails and shows the above-mentioned error.
    And I am consistently reproducing this.
    Thanks
    Bikram

  • Error trying to run SSIS Package via SQL Server Agent: DTExec: Could not set \Package.Variables[User::VarObjectDataSet].Properties[Value] value to System.Object

    Situation:
    SSIS Package designed in SQL Server 2012 - SQL Server Data Tools
    Windows 7 - 64 bit.
    The package (32 bit) extracts data from a SQL Server db to an Excel Output file, via an OLE DB connection.
    It uses 3 package variables:
    *) SQLCommand (String) to specify the SQL statement to be executed by the package
    Property path: \Package.Variables[User::SQLCommand].Properties[Value]
    Value: select * from CartOrder
    *) ExcelOutputFile (String) to specify path and filename of the Excel output file
    Property path: \Package.Variables[User::ExcelOutputFile].Properties[Value]
    Value: f:\Output Data.xls
    *) VarObjectDataSet (Object) to hold the data returned by SQL Server
    Property path: \Package.Variables[User::VarObjectDataSet].Properties[Value]
    Value: System.Object
    It consists of 2 components:
    *) Execute SQL Task: executes the SQL statement passed in via a package variable. The resulting rows are stored in the package variable VarObjectDataSet
    *) Script Task: creates the physical output file and iterates VarObjectDataSet to populate the Excel file.
    Outcome and issue: The package runs perfectly fine both in SQL Server Data Tools itself and in DTEXECUI.
    However, whenever I run it via SQL Server Agent (with the 32-bit runtime option set), it returns the error message below.
    This package contains 3 package variables, but the error stating that a package variable cannot be set pops up for VarObjectDataSet only. This makes me wonder if it is at all possible to set the value of a package variable
    of type Object.
    Can anybody help me on this, please?
    Message
    Executed as user: NT Service\SQLSERVERAGENT. Microsoft (R) SQL Server Execute Package Utility  Version 11.0.2100.60 for 32-bit  Copyright (C) Microsoft Corporation. All rights reserved.    Started:  6:40:20 PM  DTExec: Could
    not set \Package.Variables[User::VarObjectDataSet].Properties[Value] value to System.Object.  Started:  6:40:20 PM  Finished: 6:40:21 PM  Elapsed:  0.281 seconds.  The package execution failed.  The step failed.
    Thank you very much in advance
    Jurgen

    Hi Visakh,
    thank you for your reply.
    So, judging by your reply, not all package variables used inside a package need to have a value set when run via DTExec?
    I already tried that, but my package ended up in error (something to do with "... invocation ...", and that error is anything but clearly documented; judging by the error message itself, it could be just about anything). That is why I asked my
    first question about the Object-type package variable.
    Now, I will remove it from the 'set values' list and have another go at cracking the unclear "... invocation ..." error message. Does an error message about "... invocation ..." ring any bells, now that we are talking about it here?
    Thx in advance
    Jurgen
    Yes, exactly.
    You need to set values only for those variables which need to be controlled from outside the package.
    Any variable which gets its value through an expression set inside the package, or through a query inside an Execute SQL Task / Script Task, can be ignored from DTExec.
    OK, I've seen the invocation error mostly inside the Script Task. This may be because of some error inside the script written in the Script Task. If it appeared after you removed the variable, then it may be because some reference to the variable still exists within the Script Task.
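    As a hedged illustration of the point above (the package path is hypothetical; the variable names and values are taken from this thread), a DTExec invocation would set only the two String variables and leave the Object variable alone:
    dtexec /F "C:\Packages\ExportToExcel.dtsx" ^
        /SET "\Package.Variables[User::SQLCommand].Properties[Value];select * from CartOrder" ^
        /SET "\Package.Variables[User::ExcelOutputFile].Properties[Value];f:\Output Data.xls"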
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Urgent : Change in displayed column name of the Metadata

    Hi friends,
    I am required to display a field called "Ansprechpartner" (contact person) in my layout set.
    As I did not get any corresponding metadata for it in Content Management,
    I used the owner field instead. But the German name for it after a language change is "Verantwortlicher".
    Now it is required that the same data under the owner column is displayed with the column name changed from Verantwortlicher to Ansprechpartner.
    I tried changing the key label in the metadata, but in vain.
    Please help me out...
    Sweta.

    Hi,
    I would recommend duplicating the owner property instead of changing it, to show a different label.
    So the steps should be:
    1. Duplicate the owner property and create your own
    2. Follow the "Changing Labels for Properties" tutorial to change the label:
    http://help.sap.com/saphelp_nw70/helpdata/en/65/6fc63ed4027f6be10000000a114084/frameset.htm
    3. Show this property in your CollectionRenderer instead of owner
    Greetings,
    Praveen Gudapati
    [Points are always welcome for helpful answers]

  • Dynamically change the report column name.

    Hi All,
    I have a report where I am showing data for weeks greater than the current week of the year, and the code for it is below:
    SELECT
        item_number,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+1 THEN
                    quantity
            END
        ) plus_1,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+2 THEN
                    quantity
            END
        ) plus_2,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+3 THEN
                    quantity
            END
        ) plus_3,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+4 THEN
                    quantity
            END
        ) plus_4,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+5 THEN
                    quantity
            END
        ) plus_5,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+6 THEN
                    quantity
            END
        ) plus_6,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+7 THEN
                    quantity
            END
        ) plus_7,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+8 THEN
                    quantity
            END
        ) plus_8,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+9 THEN
                    quantity
            END
        ) plus_9,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+10 THEN
                    quantity
            END
        ) plus_10,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+11 THEN
                    quantity
            END
        ) plus_11,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+12 THEN
                    quantity
            END
        ) plus_12,
        SUM
        (   CASE
                WHEN year_week_num = to_number(to_char(sysdate,'IW'))+13 THEN
                    quantity
            END
        ) plus_13     
    FROM
        (   select
                re.item_number,
                row_gen.year_week_num,
                SUM(NVL(re.quantity,0)) OVER(PARTITION BY re.item_number ORDER BY row_gen.year_week_num) quantity,
                ROW_NUMBER() OVER(PARTITION BY re.item_number, row_gen.year_week_num ORDER BY NULL) rn
            from
                (   SELECT
                        to_number(to_char(sysdate,'IW')) + ROWNUM year_week_num
                    FROM
                        DUAL
                    CONNECT BY LEVEL <= 13
                ) row_gen LEFT OUTER JOIN
                        (   SELECT
                                le.item_number,
                                le.quantity,
                                to_number(to_char(sysdate,'IW'))+1 year_week_num
                            FROM
                                BACKLOG_WEEK_WH_AFTR_ATP le
                            UNION ALL
                            SELECT
                                re.item_number,
                                -re.quantity,
                                to_number(substr(re.year_week,-2,2)) year_week_num
                            FROM
                                BACKLOG_ATP_GT_CW re
                         ) re
                    PARTITION BY (re.item_number)
                    ON ( row_gen.year_week_num = re.year_week_num)
    WHERE
        rn = 1
    GROUP BY
        item_number
    I have an item on the report page which displays which week this year holds; the code for it is below.
    In the item source, I have selected the source type "SQL query (return single value)", and the current week is returned as "2011-WK30":
    select to_char(sysdate,'YYYY"-WK"IW') from dual;
    Please suggest how I can change the display of the columns dynamically. I want PLUS_1 to show as 2011-WK31, PLUS_2 to show as 2011-WK32 and so on for this week. When next week comes, plus_1 should show as 2011-WK32, plus_2 should show as 2011-WK33.
    Any help how to do this?
    Thanks in advance
    Regards

    Hi,
    Go to the Report Attributes, in the Column Attributes section:
    1) Use Headings Type as PL/SQL.
    2) In the text area "Function returning colon delimited headings", write a PL/SQL function body returning colon-delimited headings, as in the sketch below.
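    A sketch only (APEX evaluates the text area as a PL/SQL function body returning the heading string; the 13-week count mirrors the query above):
    DECLARE
        l_head VARCHAR2(4000) := 'ITEM_NUMBER';
    BEGIN
        -- One heading per future week: 2011-WK31:2011-WK32:... relative to today's date
        FOR i IN 1 .. 13 LOOP
            l_head := l_head || ':' || to_char(sysdate + (i * 7), 'YYYY"-WK"IW');
        END LOOP;
        RETURN l_head;
    END;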
    Hope it helps!
    Regards,
    Kiran

  • Filtering extended event in sql server 2008 r2

    This code has been generated in sql server 2012 (using the graphical interface).
    CREATE EVENT SESSION [backupsmssql] ON SERVER
    ADD EVENT sqlserver.sp_statement_starting(
    ACTION(
    sqlserver.client_app_name,
    sqlserver.client_hostname,sqlserver.nt_username,
    sqlserver.session_nt_username,sqlserver.sql_text,
    sqlserver.username)
    WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'%backup database%'))
    WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
    If I try to run it on sql server 2008 r2, the filtering part seems to be misinterpreted and the following error is thrown:
    Msg 25706, Level 16, State 8, Line 1
    The event attribute or predicate source, "sqlserver.sql_text", could not be found.
    If I remove the WHERE clause, the statement runs fine, even though sqlserver.sql_text is returned as part of the actions. So obviously "sqlserver.sql_text" exists. Why would I receive a message that it does not exist in the
    WHERE clause? Was "like_i_sql_unicode_string" nonexistent in 2008 R2, or has the syntax changed in 2012? How can we filter sql_text in 2008 R2? I can't seem to find any doc regarding this; help would be appreciated.
    p.s. There is a very similar question here but it has been closed by the moderators and does not answer the question:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/76c2719c-ea02-4449-b59e-465a24c37ba8/question-on-sql-server-extended-event?forum=sqlsecurity

    You are on the right track:
    The differences in the available events and predicates (source and compare) between SQL Server 2008/R2 and 2012 are quite substantial.
    So the LIKE operator is not available at all under 2008/R2 as a comparison predicate, and sql_text is also not available as a source predicate - only as an action itself. One has to realize that actions are not automatically also predicates.
    For a complete list of predicates you can query like this:
    SELECT dm_xe_packages.name AS package_name,
    dm_xe_objects.name AS source_name,
    dm_xe_objects.description
    , dm_xe_objects.object_type
    FROM sys.dm_xe_objects AS dm_xe_objects
    INNER JOIN sys.dm_xe_packages AS dm_xe_packages
    ON dm_xe_objects.package_guid = dm_xe_packages.guid
    WHERE
    (dm_xe_packages.capabilities IS NULL OR dm_xe_packages.capabilities & 1 = 0)
    AND (dm_xe_objects.capabilities IS NULL OR dm_xe_objects.capabilities & 1 = 0)
    AND dm_xe_objects.object_type
    IN ( 'pred_source', 'pred_compare')
    ORDER BY dm_xe_objects.object_type
    Unfortunately, for your specific filter there is no workaround within Extended Events.
    You would have to resort to another predicate for filtering altogether.
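    One hedged alternative sketch: keep the session unfiltered (still collecting the sql_text action), add a ring_buffer target to it, and filter when reading the target instead:
    -- Read the ring buffer and filter on sql_text at query time (works on 2008 R2)
    SELECT n.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
    FROM (SELECT CAST(t.target_data AS XML) AS target_data
          FROM sys.dm_xe_sessions s
          JOIN sys.dm_xe_session_targets t ON s.address = t.event_session_address
          WHERE s.name = 'backupsmssql') AS src
    CROSS APPLY src.target_data.nodes('/RingBufferTarget/event') AS q(n)
    WHERE n.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') LIKE N'%backup database%';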
    BUT: if you are on Enterprise Edition, why not use Auditing? There is an audit group for Backup/Restore.
    It would be really simple, like the following:
    CREATE SERVER AUDIT SPECIFICATION [Audit_BackupRestores]
    FOR SERVER AUDIT [AuditTarget]
    ADD (BACKUP_RESTORE_GROUP)
    If you are on Standard, you have found yet another reason to upgrade, I am afraid to say.
    Andreas Wolter (Blog |
    Twitter)
    MCSM: Microsoft Certified Solutions Master Data Platform, MCM, MVP
    www.SarpedonQualityLab.com |
    www.SQL-Server-Master-Class.com

  • Error loading data into Essbase from the SQL Server

    Hello experts!
    I've got another urgent and confusing issue. I am loading data from a SQL Server view into Essbase (which is reversed with multiple data columns) using ODI 11.1.1.5, and I get the following error at the "3-Loading - SrcSet0 - Load Data" step:
    ODI-1227: Task SrcSet0 (Loading) fails on the source MICROSOFT_SQL_SERVER connection.
    Caused By: java.sql.SQLException: [FMWGEN][SQLServer JDBC Driver][SQLServer]Incorrect syntax near the keyword 'View'
    where "View" is the name of the dimension (non-data) column in the SQL view.
    Please help me with any hints! Thank you so much!

    John, thank you so much!
    Your answer is exactly correct!
    "View" is a reserved keyword in SQL Server, so this word cannot be used unquoted as a column name in SQL Server tables. Once the name of the column was changed from "View" to another, the interface runs without errors!

  • Special Column names in SQL Server

    Hi,
    I am trying to use ODI to load data from SQL Server to Oracle. My problem is having special column names in the old SQL Server database. The columns have names such as 9500Column1. I tried enclosing the names with square brackets in the model definition, but it did not work.
    Any ideas? Any modifications I might make to the LKM?
    Nimrod

    Try enclosing the special fields in double quotes instead of [ and ]. Yes, that isn't the T-SQL standard, but it works, even in SS Studio.
    I had a SELECT statement in a procedure that died when it hit a space in a column name. After hours of unsuccessful attempts to escape [ and ], it occurred to me that maybe double quotes would do the trick, as that is the PL/SQL standard. I think the proverbial feather could have knocked me over. Or was that me kicking myself in the rear?
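    A hedged sketch of the point above (the table name is hypothetical; double-quoted identifiers work in T-SQL as long as QUOTED_IDENTIFIER is ON, which most clients set by default):
    SET QUOTED_IDENTIFIER ON;
    SELECT "9500Column1" AS col1,
           "Column With Space" AS col2   -- also works for names containing spaces
    FROM dbo.SourceTable;                -- hypothetical table name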
    Regards,
    Cameron Lackpour

  • SQL Server Change Notification

    We are running SQL Server 2008 for our enterprise application (MES), for tracking product as it moves through the shop floor. All of the terminals are hooked to the same database through a data layer. We need a mechanism
    for updating the shop floor computers when a change takes place in a few tables. Right now we have a timer on the form that re-queries the database every 30 seconds. While in testing this was fine, but now that we have more terminals and more data
    flowing around, changes are not showing up in a timely manner.  Although 30 seconds seems like a long time in between queries, product may move from one station to another within the 30 second window so it doesn't show up in the next queue for up to 30
    seconds delaying work.  Right now the proposed work around from the developers is to put an "update" button on the screen to force an update.  This works, however, it does interfere with the work flow of the users. 
    I am wondering if there is a way for SQL Server to make a "shout out" when a table gets an update, delete, insert operation and send a refresh event that the stations could listen to and if the table they are monitoring changes, do a refresh from
    the table.  There has been discussion of creating a table in the database that gives the operation (insert, update, delete) and the table name monitoring every second, then having the service send out an update.  The table be monitored would be updated
    by SQL Server after each insert, update, and delete.  Once the service picks up the records, I would delete the records keeping the table light so a constant query every 3-5 seconds or so won't pull down SQL Server, but again, there is a potential of
    breaking and falling back to the refresh button.
    So if SQL Server could just notify this table changed, and this value changed in this column (we only need to monitor one column so we know which station to update) and have the application do the update.
    I know this is long and I apologize, but there has to be a way to monitor a database of changes besides a timed query from a program that is returning a full dataset even if it doesn't need to.  All I can see is SQL performance continuing to drop as
    more stations come on line.
    Any thoughts or suggestions, or pointers to articles/code, would be much appreciated. We are using SQL Server 2008 (RTM) Standard Edition on Windows NT 6.0 X86.
    Thanks.

    >> We need a mechanism for updating the shop floor computers
    Do you mean that you use a local database on each machine, and you want to update it?
    I will assume this is what you are doing for now...
    My answer is based on the assumption that the situation is clear to me, and it probably is not, since you cannot give us all the information in a forum :-)
    >> Right now we have a timer
    Scheduled jobs are not a fit for your situation as I understand it. There are 2 main disadvantages: (1) between the scheduled jobs the data might be inconsistent; (2) if there is no change
    in the data, you execute the same jobs with no need (wasted resources).
    >> While in testing, this was fine, but
    This is the most common issue, and one that I love :-)
    Most of my work comes from those situations where companies developed their system in development, but in production they need to start from scratch. The magic is pre-planning, or application architecture :-)
    >> "update" button on the client side
    This is a good option in some cases (I use it in several applications)
    >> "shout out" when a table gets an update, delete, insert operation
    Developers call this "shot up" EVENT :-) and the idea is using PUSH instead of PULL like in the "update" button
    The answer is yes. there are several options that can be use but I am not sure that I would chose this solution (my prefered solution in the end)
    * Use
    Profiler or
    Extended Event (I recommend not to chose this option)
    * Use
    Triggers on DML (in the trigger you can use any query like update the external local databases). This option will make any insertqdelete more complext and will need more resources, but you are talking about 1 query per second then this might work OK. working
    with 100K per second will make your operation slow, and probably there will be locks. You can use Service Broker in order to communicate directly with your application, and working a-sincronic. This will improve your work a lot.
    * Manage the data in the application, and not in the database! This is my prefered option! all the local application should comunicate directly with the main database. keep some information in memory. Keep open "channel" to the main server
    (just like developing a chat). 
    There are several build in option in .Net for push technology. I recommend to use Google to find the best option for you (for example look for WebSocket).
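    A minimal sketch of the trigger option above (table and column names are hypothetical; the trigger just records which station changed, so clients can poll one narrow table, or a Service Broker queue could be fed the same way):
    CREATE TABLE dbo.ChangeLog
    (
        id INT IDENTITY(1,1) PRIMARY KEY,
        table_name SYSNAME NOT NULL,
        station_id INT NOT NULL,          -- the one column the stations monitor
        change_time DATETIME NOT NULL DEFAULT GETDATE()
    );
    GO
    CREATE TRIGGER trg_Product_Changed ON dbo.ProductTracking
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Log the stations affected by this DML statement
        INSERT INTO dbo.ChangeLog (table_name, station_id)
        SELECT 'ProductTracking', station_id FROM inserted
        UNION
        SELECT 'ProductTracking', station_id FROM deleted;
    END;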
    *** I will have more to write in a min... I have phone ***
      Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]

  • Help with creating a sql file that will capture any database table changes.

    We are in the process of creating DROP/Create tables, and using exp/imp data into the tables (the data is in flat files).
    Our client is a bit curious to work with. They make alterations to their database (change the layout, change the datatypes, drop tables) without our knowing. This has created a lot of issues for us.
    Is there a way that we can create a SQL script which can capture any table changes on the database, so that when the client tries to execute the imp batch file, the SQL file first checks to see if any changes were made? If so, it should stop execution and give an error message.
    Any help/suggestions would be highly appreciable.
    Thanks,

    Just to clarify...
    1. DDL commands are like CREATE, DROP, ALTER. (These are different than DML commands - INSERT, UPDATE, DELETE).
    2. The DDL trigger is created at the database level, not on each table. You only need one DDL trigger.
    3. You can choose the DDL commands for which you want the trigger to fire (probably, you'll want CREATE, DROP, ALTER, at a minimum).
    4. The DDL trigger only fires when one of these DDL commands is run.
    Whether you have 50 tables or 50,000 tables is not significant to performance in this context.
    What's significant is how often you'll be executing the DDL commands on which the trigger is set to fire, and whether the DDL commands execute in acceptable time with the trigger in place.
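    As a hedged illustration (shown in SQL Server syntax, since that is the platform elsewhere on this page; Oracle has an equivalent CREATE TRIGGER ... ON DATABASE form), a database-level DDL trigger logging changes might look like:
    CREATE TABLE dbo.DDL_Log
    (
        event_time DATETIME NOT NULL DEFAULT GETDATE(),
        event_data XML NOT NULL            -- full EVENTDATA() payload: command text, object, login
    );
    GO
    CREATE TRIGGER trg_Capture_DDL ON DATABASE
    FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.DDL_Log (event_data) VALUES (EVENTDATA());
    END;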

  • SQL Server 2000 auditing/tracking DML changes on specific tables.

    Hello,
    We would like to audit/track all DML activities on a couple of tables which reside on SQL Server 2000. Triggers are an option, we think, but we thought that would impact performance because of the activity on those tables. Could someone please suggest any other
    options that are possible with SQL Server 2000 to capture these changes.
    Thanks,

    Hello,
    You may consider third-party tools that may still support SQL Server 2000. The following tool supported SQL Server 2000 a few years
    ago; not sure about today:
    http://www.apexsql.com/sql_tools_comply.aspx
    Server side traces may be another option:
    http://msdn.microsoft.com/en-us/library/cc293613.aspx
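    A hedged sketch of the server-side trace option (the file path and table name are hypothetical; event 12 is SQL:BatchCompleted, and columns 1, 11 and 14 are TextData, LoginName and StartTime per the sp_trace_setevent documentation):
    DECLARE @TraceID INT
    DECLARE @maxfilesize BIGINT
    DECLARE @on BIT
    SET @maxfilesize = 50
    SET @on = 1
    -- Create the trace (option 2 = TRACE_FILE_ROLLOVER); SQL Server appends .trc to the path
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\dml_audit', @maxfilesize, NULL
    -- Capture the batch text, login and start time of completed batches
    EXEC sp_trace_setevent @TraceID, 12, 1, @on
    EXEC sp_trace_setevent @TraceID, 12, 11, @on
    EXEC sp_trace_setevent @TraceID, 12, 14, @on
    -- Restrict to batches touching the audited table (6 = LIKE)
    EXEC sp_trace_setfilter @TraceID, 1, 0, 6, N'%MyAuditedTable%'
    -- Start the trace
    EXEC sp_trace_setstatus @TraceID, 1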
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com
