Migrating CMP EJB mapped to SQL Server table with identity column from WebLogic

Hi,
    I want to migrate an application from WebLogic to SAP NetWeaver 2004s (7.0). We had successfully migrated this application to an earlier version of NetWeaver. I have a number of CMP EJBs that are mapped to SQL Server tables whose primary keys are identity columns (auto-generated by SQL Server). I am having difficulty mapping the same in persistent.xml. This scenario works perfectly well in WebLogic, and it worked in the earlier version of NetWeaver by using ejb-pk.
   Please let me know how to proceed.
-Sekhar
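
For context, a minimal sketch of the kind of table described (hypothetical names; the point is that SQL Server generates the PK on insert, so the persistence layer must not supply it):

    -- Hypothetical table shape for a CMP entity with an identity PK.
    -- SQL Server assigns OrderItemID automatically on INSERT.
    CREATE TABLE OrderItem (
        OrderItemID  INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        Description  NVARCHAR(100) NOT NULL
    );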


Similar Messages

  • Error inserting a row into a table with identity column using cfgrid on change

    I got an error trying to insert a row into a table with an identity column using cfgrid onchange; see below.
    I would also like to use cfstoreproc instead of cfquery, but which arguments do I need to pass and how do I use it? Usually I use a stored procedure such as:
    update table (xxx,xxx,xxx)
    values (uu,uuu,uu)
         My component
    <!--- Edit a Media Type  --->
        <cffunction name="cfn_MediaType_Update" access="remote">
            <cfargument name="gridaction" type="string" required="yes">
            <cfargument name="gridrow" type="struct" required="yes">
            <cfargument name="gridchanged" type="struct" required="yes">
            <!--- Local variables --->
            <cfset var colname="">
            <cfset var value="">
            <!--- Process gridaction --->
            <cfswitch expression="#ARGUMENTS.gridaction#">
                <!--- Process updates --->
                <cfcase value="U">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                    <cfquery datasource="#application.dsn#">
                    UPDATE SP.MediaType
                    SET #colname# = '#value#'
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <!--- Process deletes --->
                <cfcase value="D">
                    <!--- Perform actual delete --->
                    <cfquery datasource="#application.dsn#">
                    update SP.MediaType
                    set Deleted=1
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <cfcase value="I">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                    <cfquery datasource="#application.dsn#">
                    insert into SP.MediaType (#colname#)
                    Values ('#value#')
                    </cfquery>
                </cfcase>
            </cfswitch>
        </cffunction>
    my table:
    mediatype:
    mediatypeid (primary key, identity)
    mediatypename
    my code is
    <cfform method="post" name="GridExampleForm">
            <cfgrid format="html" name="grid_Tables2" pagesize="3"  selectmode="edit" width="800px" 
            delete="yes"
            insert="yes"
                  bind="cfc:sp3.testing.MediaType.cfn_MediaType_All
                                                                ({cfgridpage},{cfgridpagesize},{cfgridsortcolumn},{cfgridsortdirection})"
                  onchange="cfc:sp3.testing.MediaType.cfn_MediaType_Update({cfgridaction},
                                                {cfgridrow},
                                                {cfgridchanged})">
                <cfgridcolumn name="MediaTypeID" header="ID"  display="no"/>
                <cfgridcolumn name="MediaTypeName" header="Media Type" />
            </cfgrid>
    </cfform>
    On insert I get the following error message in the AJAX log:
    http: Error invoking xxxxxxx/MediaType.cfc : Element '' is undefined in a CFML structure referenced as part of an expression.
    {"gridaction":"I","gridrow":{"MEDIATYPEID":"","MEDIATYPENAME":"uuuuuu","CFGRIDROWINDEX":4},"gridchanged":{}}
    Thanks

    Is this with the Travel database or another database?
    If it's another database, make sure your columns allow nulls. To check this in the Server Navigator, expand your DataSource down to the column, select the column, and view the Is Nullable property in the Property Sheet.
    If still no luck, check out a tutorial, like Performing Inserts:
    http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/index.jsp
    John

  • Moving Access table with an autonumber key to SQL Server table with an identity key

    I have an SSIS package that is moving data from an Access 2010 database to a SQL Server 2008 R2 database.  Two of the tables that I am migrating have identity keys in the SQL Server tables and I need to be able to move the autonumber keys to the SQL
    Server tables.  I am executing a SQL Script to set the IDENTITY_INSERT ON before I execute the Data Flow task moving the data and then execute a SQL Script to set the IDENTITY_INSERT OFF after executing the Data Flow task.
    It is failing with an error that says:
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E21  Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was
    done.".
    Error: 0xC020901C at PGAccountContractDetail, PGAccountContractDetail [208]: There was an error with input column "ID" (246) on input "OLE DB Destination Input" (221). The column status returned was: "User does not have permission to
    write to this column.".
    Error: 0xC0209029 at PGAccountContractDetail, PGAccountContractDetail [208]: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR.  The "input "OLE DB Destination Input" (221)" failed because error code 0xC020907C occurred, and the
    error row disposition on "input "OLE DB Destination Input" (221)" specifies failure on error. An error occurred on the specified object of the specified component.  There may be error messages posted before this with more information
    about the failure.
    Error: 0xC0047022 at PGAccountContractDetail, SSIS.Pipeline: SSIS Error Code DTS_E_PROCESSINPUTFAILED.  The ProcessInput method on component "PGAccountContractDetail" (208) failed with error code 0xC0209029 while processing input "OLE DB
    Destination Input" (221). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.  There may be error messages posted
    before this with more information about the failure.
    Any ideas on what is causing this error? I am thinking it is the identity key in SQL Server that is not allowing the update. But I do not understand why, if I set IDENTITY_INSERT ON.
    Thanks in advance for any help/guidance provided.

    I suspect it is the security as specified in the message, e.g. your DBA set the ID columns so no user can override values in them.
    And I suggest you first put the data into a staging table, then push it to the destination; this does not resolve the issue, but it gives you better control over the processing.
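    A minimal sketch of that staging pattern (hypothetical column names; note that SET IDENTITY_INSERT is session-scoped, so it must run in the same batch/connection as the INSERT, and it requires ALTER permission on the table):
        -- Load the Access data into a staging table with a plain INT column first,
        -- then copy to the destination in one batch so IDENTITY_INSERT applies
        -- to the same session that performs the INSERT.
        SET IDENTITY_INSERT dbo.PGAccountContractDetail ON;
        INSERT INTO dbo.PGAccountContractDetail (ID, ContractNo)  -- explicit column list is required
        SELECT ID, ContractNo
        FROM dbo.Staging_PGAccountContractDetail;
        SET IDENTITY_INSERT dbo.PGAccountContractDetail OFF;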
    Arthur

  • Migrating from SQL Server tables with column names starting with an integer

    hi,
    I'm trying to migrate a database from SQL Server, but there are a lot of tables with column names starting with an integer, e.g. *8420_SubsStatusPolicy*.
    I want to do an offline migration, so when I create the scripts these columns should be created with the same name.
    Can we create rules so that when a column like this is migrated, a character is appended in front of it?
    When I use the Copy to Oracle option it renames the column by default to A8420_SubsStatusPolicy.
    Edited by: user8999602 on Apr 20, 2012 1:05 PM

    Hi,
    Oracle doesn't allow object names to start with an integer. I'll check what happens to such names during a migration, as I haven't come across this before.
    Regards,
    Mike

  • Unable To Select From SQL Server table with more than 42 columns

    I have set up a link between a Microsoft SQL Server 2003 database and an Oracle 9i database using Heterogeneous Services (HSODBC). It's working well with most of the schema I'm selecting from, except for 3 tables, and I don't know why. The common denominator is that they all have at least 42 columns: two have 42 columns, one has 56, and the other 66. Two of the tables are empty, one has almost 100k records, and one has 170k records, so I don't think the size of the table matters.
    Is there a limitation on the number of table columns you can select from through a dblink? Even the following statement errors out:
    select 1
    from "Table_With_42_Cols"@sqlserver_db
    The error message I get is:
    ORA-28500: connection from ORACLE to a non-Oracle system returned this message [Generic Connectivity Using ODBC]
    ORA-02063: preceding 2 lines from sqlserver_db
    Any assistance would be greatly appreciated. Thanks!

    Not a very efficient and space-friendly design to do name-value pairs like that.
    One method to consider is splitting those 1500 parameters up into groupings of similar parameters, and then having a table per group.
    Another option would be to use "vertical table partitioning" (as opposed to the more standard horizontal partitioning provided by the Oracle partition option) - this can be achieved (kind of) in Oracle using clusters.
    Sooner or later this name-value design is going to bite you hard. It has 1500 rows where there should be only 1 row. It is not scalable, and as you're discovering, it is unnatural to use. I would rather change that table and design sooner than later.
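    To illustrate the grouped-table idea with a minimal sketch (hypothetical parameter names):
        -- Name-value design: one row per parameter, e.g. 1500 rows per device:
        --   CREATE TABLE DeviceParams (DeviceID INT, ParamName VARCHAR(128), ParamValue VARCHAR(4000));
        -- Grouped design: one row per device per family of related parameters:
        CREATE TABLE DeviceNetworkParams (
            DeviceID     INT PRIMARY KEY,
            TimeoutSecs  INT,
            RetryCount   INT
        );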

  • SQL Server 2014 Bug with Clustered Index on Temp Table with Identity Column

    Hi,
    Here's a stored procedure.  (It could be shorter, but this will show the problem.)
              CREATE PROCEDURE dbo.SGTEST_TBD_20141030 AS
              IF OBJECT_ID('TempDB..#Temp') IS NOT NULL
                 DROP TABLE #Temp
          CREATE TABLE #Temp (
           Col1        INT NULL
          ,Col2        INT IDENTITY NOT NULL
          ,Col3        INT NOT NULL
          )
              CREATE CLUSTERED INDEX ix_cl ON #Temp(Col2)
              SET IDENTITY_INSERT #Temp ON;
              RAISERROR('Preparing to INSERT INTO #Temp...',0,1) WITH NOWAIT;
              INSERT INTO #Temp (Col1, Col2, Col3)
              SELECT 1,2,3
              RAISERROR('Insert Success!!!',0,1) WITH NOWAIT;
              SET IDENTITY_INSERT #Temp OFF;
    In SQL Server 2014, if I execute this (EXEC dbo.SGTEST_TBD_20141030), it works. If I execute it a second time, it fails hard with:
            Msg 0, Level 11, State 0, Line 0
            A severe error occurred on the current command.  The results, if any, should be discarded.
            Msg 0, Level 20, State 0, Line 0
            A severe error occurred on the current command.  The results, if any, should be discarded.
    In SQL Server 2012, I can execute it over and over with no problem. I've discovered two workarounds:
    1) Add "WITH RECOMPILE" to the SP CREATE/ALTER statement, or
    2) Declare the clustered index in the CREATE TABLE statement, e.g. ",UNIQUE CLUSTERED (Col2)".
    The second option only works, though, if the column is unique. I've opted for "WITH RECOMPILE" for now, but I thought I should share.
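    For reference, workaround 2 folded into the table definition would look like this:
              -- Clustered index declared inline as a constraint, so no separate
              -- CREATE CLUSTERED INDEX statement runs against the temp table
              -- (only valid because Col2 is unique).
              CREATE TABLE #Temp (
               Col1        INT NULL
              ,Col2        INT IDENTITY NOT NULL
              ,Col3        INT NOT NULL
              ,UNIQUE CLUSTERED (Col2)
              )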

    Hi,
    I did not get any error message:
         CREATE TABLE #Temp (
           Col1        INT NULL
          ,Col2        INT IDENTITY NOT NULL
          ,Col3        INT NOT NULL
          )
          CREATE CLUSTERED INDEX ix_cl ON #Temp(Col2)
              SET IDENTITY_INSERT #Temp ON;
              RAISERROR('Preparing to INSERT INTO #Temp...',0,1) WITH NOWAIT;
              INSERT INTO #Temp (Col1, Col2, Col3)
              SELECT 1,2,3
              RAISERROR('Insert Success!!!',0,1) WITH NOWAIT;
              SET IDENTITY_INSERT #Temp OFF;
    SELECT * FROM #Temp
    OUTPUT:
    Col1  Col2  Col3
    1     2     3
    1     2     3
    1     2     3
    Select @@version
    --Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64) 
    Oct 19 2012 13:38:57 
    Copyright (c) Microsoft Corporation
    Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
    Thanks, Shiven :)

  • Ncache, MemCache, and SQL Server tables With Joins

    Hello,
    I'd like to look into a caching solution that could store not only the results of a query but also the table structures, so joins can be done on the caching layer, eliminating the need to do them on the SQL Server side. Essentially I want to cache the customers table and the orders table so the lookups are done in the caching software. Does anyone know of a good solution for this?

    Just to get an update on Microsoft SQL Server caching solutions - what are they and where can we learn more about them?

  • Selecting from a SQL Server 2005 with long column names (> 30 chars)

    Hi,
    I was able to set up a db link from Oracle 11.2.0.1 to SQL Server 2005 using DG4ODBC.
    My problem is that some column names in the SQL Server database are longer than 30 chars, and trying to select them gives me the ORA-00972: identifier is too long error.
    If I omit these columns the select succeeds.
    I know I can create a view in SQL Server and query it instead of the original table, but I was wondering if there's a way to overcome it with SQL.
    My select looks like this:
    select "good_column_name" from sometable@sqlserver_dblink -- this works
    select "good_column_name","very_long_column_name>30 chars" from sometable@sqlserver_dblink -- ORA-00972
    Thanks

    I tried creating a view with shorter column names, but selecting from the view still returns an error.
    create view v_Boards as (
        select [9650_BoardId] as BoardId,
               [9651_BoardType] as BoardType,
               [9652_HardwareVendor] as HardwareVendor,
               [9653_BoardVersion] as BoardVersion,
               [9654_BoardName] as BoardName,
               [9655_BoardDescription] as BoardDescription,
               [9656_SlotNumber] as SlotNumber,
               [9670_SegmentId] as SegmentId,
               [MasterID] as MasterID,
               [9657_BoardHostName] as BoardHostName,
               [9658_BoardManagementUsername] as BoardManagementUsername,
               [9659_BoardManagementPassword] as BoardManagementPassword,
               [9660_BoardManagementVirtualAddress] as BoardManagementVirtualAddress,
               [9661_BoardManagementTelnetLoginPrompt] as MANAGEMENTTELNETLOGINPROMPT,
               [9662_BoardManagementTelnetPasswordPrompt] as MANAGEMENTTELNETPASSPROMPT,
               [9663_BoardManagementTelnetCommandPrompt] as MANAGEMENTTELNETCOMMANDPROMPT
        from Boards
    )
    Performing a select * from this view in SQL Server works and shows the short column names.
    This is the error I'm getting when performing a select * from v_boards@sqlserver_dblink:
    ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
    [Microsoft][SQL Native Client][SQL Server]Invalid column name 'BoardManagementTelnetLoginProm'. {42S22,NativeErr = 207}[Microsoft]
    [SQL Native Client][SQL Server]Invalid column name 'BoardManagementTelnetPasswordP'. {42S22,NativeErr = 207}[Microsoft][SQL Native
    Client][SQL Server]Invalid column name 'BoardManagementTelnetCommandPr'. {42S22,NativeErr = 207}[Microsoft][SQL Native Client][SQL
    Server]Statement(s) could not be prepared. {42000,NativeErr = 8180}
    ORA-02063: preceding 2 lines from sqlserver_dblink
    I also tried replacing the * with specific column names, but it fails on the columns that have a long name (it doesn't recognize the short names from the view).
    What am I doing wrong?
    Edited by: Pyrocks on Dec 22, 2010 3:58 PM
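    One thing worth checking (an assumption, not a confirmed fix): identifiers coming from a non-Oracle system through the gateway are case-sensitive, so the short names usually have to be double-quoted with the exact case the view exposes, e.g.:
        select "BoardId", "BoardType", "MANAGEMENTTELNETLOGINPROMPT"
        from v_Boards@sqlserver_dblink;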

  • How to use update trigger in sql server 2008 with specific column

    Hello friends, currently my trigger fires on any update to the table, and I need to change this so it fires only when a specific column changes.
    /****** Object: Table [dbo].[User_Detail] ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[User_Detail](
    [sno] [int] IDENTITY(1,1) NOT NULL,
    [userid] [nvarchar](50) NULL,
    [name] [nvarchar](max) NULL,
    [jointype] [nvarchar](50) NULL,
    [joinside] [nvarchar](50) NULL,
    [lleg] [nvarchar](50) NULL,
    [rleg] [nvarchar](50) NULL,
    [ljoining] [int] NULL,
    [rjoining] [int] NULL,
    [pair] [int] NULL
    ) ON [PRIMARY]
    GO
    /****** Object: Table [dbo].[User_Detail] table data ******/
    SET IDENTITY_INSERT [dbo].[User_Detail] ON
    INSERT [dbo].[User_Detail] values (1, N'LDS', N'LDS Rajput', N'free', N'Left', N'jyoti123', N'SUNIL', 6, 4, 4)
    INSERT [dbo].[User_Detail] VALUES (2, N'jyoti123', N'jyoti rajput', N'free', N'Left', N'mhesh123', N'priya123', 3, 2, 2)
    SET IDENTITY_INSERT [dbo].[User_Detail] OFF
    /****** Object: Table [dbo].[User_Detail] trigger ******/
    CREATE TRIGGER triggAfterUpdate ON User_Detail
    FOR UPDATE
    AS
    declare @userid nvarchar(50);
    declare @pair varchar(100);
    select @userid=i.userid from inserted i;
    select @pair=i.pair from inserted i;
    SET NOCOUNT ON
    if update(pair)
    begin
    insert into Complete_Pairs(userid,pair)
    values(@userid,1);
    end
    GO
    /****** Object: Table [dbo].[Complete_Pairs] Script Date: 05/22/2014 21:20:35 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[Complete_Pairs](
    [Sno] [int] IDENTITY(1,1) NOT NULL,
    [userid] [nvarchar](50) NULL,
    [pair] [int] NULL
    ) ON [PRIMARY]
    GO
    What I want is for the trigger triggAfterUpdate to fire only when the pair column in the User_Detail table is updated; when we update other columns like ljoining or rjoining, the trigger should not fire.
    Can anyone suggest how this can be done or provide a solution?
    Jitendra Kumar

    > select @userid = i.userid from inserted i;
    > select @pair = i.pair from inserted i;
    The code above assumes a single-row UPDATE.
    You have to set up the trigger for set processing, i.e. for when e.g. 100 rows are updated in a single statement.
    UPDATE trigger example: http://www.sqlusa.com/bestpractices2005/timestamptrigger/
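    A set-based rewrite of the trigger might look like this (a sketch using the table names above; the join on sno and the value comparison are assumptions about the intended "only when pair actually changes" behavior):
        -- Handles multi-row UPDATEs; inserts one row per updated row
        -- whose pair value actually changed (use ALTER TRIGGER if it already exists).
        CREATE TRIGGER triggAfterUpdate ON dbo.User_Detail
        FOR UPDATE
        AS
        SET NOCOUNT ON;
        IF UPDATE(pair)  -- true when pair appears in the SET clause
        BEGIN
            INSERT INTO Complete_Pairs (userid, pair)
            SELECT i.userid, 1
            FROM inserted i
            INNER JOIN deleted d ON d.sno = i.sno
            WHERE ISNULL(i.pair, -1) <> ISNULL(d.pair, -1);  -- only rows where the value changed
        END
        GO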
    Kalman Toth

  • How to improve updating a SQL Server table with another table in Oracle

    Hi there.
    I am trying to do the following update in SSIS:
    UPDATE S
    SET
    S.COLUMN_A = OT.COLUMN_A,
    S.COLUMN_B = OT.COLUMN_B
    FROM 
    SQLT1 S INNER JOIN ORACLET1  OT ON
    S.COLUMN_C = OT.COLUMN_C
    This is what I am doing:
    I am taking the Oracle data with the ODBC Source, like this:
    Select column_A, column_B, column_C
    from OracleT1
    The thing is, the Oracle table has millions of records and this update is taking a long time, but I am not sure whether that is because of the design of the update query.
    What I wonder is whether there is another way to design this query or improve the performance of this task.
    Thanks a lot.

    Yes.
    Use an OLE DB Destination instead and save the records to a staging table.
    Then use a subsequent Execute SQL Task and do the set-based update in it, as below:
    UPDATE t
    SET COLUMN_A = s.COLUMN_A,
        COLUMN_B = s.COLUMN_B
    FROM SQLT1 t
    INNER JOIN stagingTable s
        ON s.COLUMN_C = t.COLUMN_C
    This would be much faster as it's set-based.
    Visakh

  • Migrating a table with BLOB column from one db to another

    Hello everybody,
    I have two databases, D1 and D2. Each database contains table T1 (same structure in both databases). T1 has a BLOB column and is populated in D1. I have to move all the data from T1 in database D1 into T1 in database D2.
    D1 and D2 are located on different machines. T1 in D1 is a huge table (millions of records). What is the best solution to migrate the T1 data between the 2 databases?
    Any help will be appreciated.
    Thank you in advance.
    daniela

    Depending on the version of the database you have, you could use transportable tablespaces.
    http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96524/c04space.htm#9370
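    A rough outline of that approach, assuming the version supports it (names and paths are placeholders):
        -- On D1: make the tablespace self-contained and read-only,
        -- then export its metadata (Data Pump shown; classic exp works on older versions):
        ALTER TABLESPACE t1_data READ ONLY;
        --   expdp system DIRECTORY=dp_dir DUMPFILE=t1_ts.dmp TRANSPORT_TABLESPACES=t1_data
        -- Copy the t1_data datafiles to the D2 machine at OS level, then on D2:
        --   impdp system DIRECTORY=dp_dir DUMPFILE=t1_ts.dmp TRANSPORT_DATAFILES='/u02/oradata/t1_data01.dbf'
        ALTER TABLESPACE t1_data READ WRITE;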

  • Passing data to different internal tables with different columns from a comma delimited file

    Hi,
    I have a program wherein we upload a comma-delimited file and, based on the region (we have a drop-down in the selection screen to pick the region), the data from the file is passed to an internal table. For region A we have 10 columns and for region B we have 9 columns.
    There is a split statement (split at comma) used to break the data into different columns.
    I need to add hard error messages if the number of columns in the uploaded file is incorrect. For example, if the uploaded file is of type region A, then the uploaded file should be split into 10 columns. If the file contains fewer or more columns, then an error message should be raised. Similar is the case with region B.
    I do not want to remove the existing split statement (existing code). Is there a way I can pass the data into the internal table accurately? I have gone through some posts wherein they have made use of the method cl_alv_table_create=>create_dynamic_table by passing the field catalog. But I cannot use this as I have two different internal tables to be populated based on the region. Appreciate help on this.
    Thanks,
    Pavan

    Hi Abhishek,
    I have no issues with the rows. I have a file with a format like a1,b1,c1,d1,e1; the file should be uploaded and split at commas. So far it's fine. After this, if the file is related to region A, say Asia, then it should have 5 fields (as an example). So all 5 values a1,b1,...,e1 will be passed to the 5 fields of itab1.
    I also have region B (say Europe), whose file will have only 4 fields. So the file is of the form a2,b2,c2,d2. Again the data is split at commas and passed to itab2.
    If someone loads a file related to Asia and the file has only 4 fields, then the data would be incorrect. Similar is the case when someone tries to load a Europe file with 5 fields of data. To avoid this, I want to validate the uploaded data. For this, I want to count the number of fields (separated by commas). If the number of fields is 5 then the file is related to Asia, or if the number of fields is 4 then it is a Europe file.
    Well, the number of commas is nothing but the number of fields - 1. If the file is of the form a1,b1,...,e1 then I can say: if the number of commas = 4 then it is an Asia file. But I am not sure how to write code for this. Please advise.
    Thanks,
    Pavan

  • Insert into a table with unique columns from another table.

    There are two tables:
    STG_DATA
    ORDER_NO    DIR_CUST_IND
    1002        DNA
    1005        GEN
    1005        NULL
    1008        NULL
    1001        NULL
    1001        NULL
    1006        NULL
    1000        ZZZ
    1001        ZZZ
    FACT_DATA
    ORDER_NO    DIR_CUST_IND
    1005        NULL
    1006        NULL
    1008        NULL
    I need to insert only unique [ORDER_NO] values from STG_DATA into FACT_DATA, with the corresponding [DIR_CUST_IND]. Though STG_DATA has multiple rows with the same ORDER_NO, I need to insert only one of them, and it can be any record.
    Sarvan

    CREATE TABLE #Level(ORDER_NO INT, DIR_CUST_IND CHAR(3))
    INSERT #Level
    SELECT 1002,'DNA' UNION ALL
    SELECT 1005,'GEN' UNION ALL
    SELECT 1005,NULL UNION ALL
    SELECT 1008,NULL UNION ALL
    SELECT 1001,NULL UNION ALL
    SELECT 1001,NULL UNION ALL
    SELECT 1006,NULL UNION ALL
    SELECT 1000,'ZZZ' UNION ALL
    SELECT 1001,'ZZZ'  -- UNION ALL keeps the duplicate source rows that plain UNION would remove
    SELECT ORDER_NO, DIR_CUST_IND
    FROM (SELECT ROW_NUMBER() OVER (PARTITION BY ORDER_NO ORDER BY ORDER_NO) AS RowNum, *
          FROM #Level) A
    WHERE RowNum = 1
    I hope this would give you enough idea. All you have to do is just write insert statement.
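    For completeness, the insert itself might look like this against the real tables (the NOT EXISTS guard against ORDER_NOs already present in FACT_DATA is an assumption):
        INSERT INTO FACT_DATA (ORDER_NO, DIR_CUST_IND)
        SELECT ORDER_NO, DIR_CUST_IND
        FROM (SELECT ROW_NUMBER() OVER (PARTITION BY ORDER_NO ORDER BY ORDER_NO) AS RowNum, *
              FROM STG_DATA) a
        WHERE RowNum = 1
          AND NOT EXISTS (SELECT 1 FROM FACT_DATA f
                          WHERE f.ORDER_NO = a.ORDER_NO);  -- skip keys already loaded (assumption)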
    Next time please post DDL & DML.

  • SQL Server table capture using Oracle SQL Developer

    Hello,
    I need to know how to set up Oracle SQL Developer to capture a SQL Server table with the correct field precision. For some reason, all NVARCHAR field precision sizes are double the original precision size after capturing the table.
    Example:
    SQL Server table 'A' with the following field:
    name NVARCHAR(50)
    after table capture becomes:
    name NVARCHAR2(100)
    which is double of the original precision.
    Any feed back or input is greatly appreciated.
    Thank you.
    Message was edited by:
    qa2537

    I'm sorry - correction to the SQL:
    merge into md_columns m
    using (
        select md_col.precision as prec, md_col.column_name, md_col.rowid as row_id
        from md_schemas md_s
        join md_tables md_tab on md_tab.schema_id_fk = md_s.id
        join md_catalogs md_cat on md_cat.id = md_s.catalog_id_fk
        inner join md_connections con on md_cat.connection_id_fk = con.id
        join md_columns md_col on md_col.table_id_fk = md_tab.id
        where --md_s.created_on = (select max(created_on) from md_catalogs)
        con.name like '%(corr%)'
        and md_col.column_type in ('nchar','nvarchar')
    ) merge_clause on (m.rowid = merge_clause.row_id)
    when matched then update set m.precision = merge_clause.prec/2
    That works, as long as you don't forget to commit like I did...
    Hope it helps.

  • Cannot create temporary table having identity column

    Hi experts,
    I saw the above error message while running the following statement:
           create local temporary column table #tmp_table (c1 int GENERATED by default AS IDENTITY (start with 1 increment by 1), c2 int)
         Could not execute 'create local temporary column table #tmp_table(c1 int GENERATED by default AS IDENTITY (start with ...'
         SAP DBTech JDBC: [7]: feature not supported: cannot create temporary table having identity column: C1: line 1 col 48 (at pos 47)
    I understand that normal column tables can be created with an identity column, but I don't know why temporary column tables with an identity column are not supported. Is there any configuration that can enable it for temporary column tables? Or what can I do to support it indirectly, like writing a trigger or something else?
    If not, is there any future plan for this feature?
    Regards,
    Hubery

    Hi Hubery,
    I've heard this trail of arguments before...
    Customer has a solution... they want it on HANA... but they don't want to change the solution.
    Well, fair call, I'd say.
    The problem here is: there's a mix-up of solution and implementation here.
    It should be clear by now that changing DBMS systems (in any direction) will require some effort in changing the implementation. Every DBMS works a bit differently from the others, "standard" SQL or not.
    So I don't agree with the notion of "we cannot change the implementation".
    In fact, you will have to change the implementation anyhow.
    Rather than imitating the existing solution implementation on ASE, implement it on SAP HANA.
    Filling up tons of temporary tables is not a great idea in SAP HANA - you would rather try to create calculation views that present the data ad hoc in the desired way.
    That's my 2 cts on that.
    - Lars
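    For what it's worth, if the identity column is only needed for row numbering, a sequence plus a plain column is one possible indirect workaround (a sketch; whether a global sequence fits the use case is an assumption):
        -- No identity column on the temporary table; values come from a sequence.
        CREATE SEQUENCE tmp_c1_seq START WITH 1 INCREMENT BY 1;
        CREATE LOCAL TEMPORARY COLUMN TABLE #tmp_table (c1 INT, c2 INT);
        INSERT INTO #tmp_table SELECT tmp_c1_seq.NEXTVAL, 42 FROM DUMMY;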
