Table splitting - source DB MS SQL

Dear All,
Can we use the table splitter during export of a database running on MS SQL Server?
If yes, what do we have to mention in the table splitting file?
For example, in Oracle we mention tablename%numberofsplit;ROWID.
Thanks,
Nikunj Thaker

Hi,
Let me try to help...
> I think we need a physical data server for SQL Server.
Yes, you will.
> Do we need a logical schema for SQL Server as well?
Yes, you do.
> We already have a model for the Oracle source. Do we need one for SQL Server, or is there a way we can use it for both?
You will need a new one, since they are distinct technologies.
> We need to add a new LKM for SQL Server.
No, you can use a generic SQL LKM.
> Do we need to change the CKM?
Are you validating some kind of constraint? If not, don't use any! If yes, you don't need a new CKM as long as the staging area remains the same.
I believe this post can help you a little: http://odiexperts.com/?p=652
Does that make sense?
Cezar Santos
[www.odiexperts.com]

Similar Messages

  • Export with Table Splitting: ORA-01115: IO error reading block from file

    Hello,
    We are performing the last dry run of our CU&UC conversion.
    We are now in the process of exporting the ECC6 system (Oracle 10.2.0.4.0, HP-UX ia64) using the sapinst "table splitting preparation" feature.
    When doing so, we are facing critical errors:
    Creating file /export_uni/sapinst_splitting/ora_query3_tmp3_1.sql.
    ERROR 2010-08-11 10:27:28.881
    CJS-00084  SQL statement or script failed. DIAGNOSIS: Error message: ORA-12801: error signaled in parallel query server P002
    ORA-01115: IO error reading block from file 90 (block # 16640)
    ORA-27072: File I/O error
    HPUX-ia64 Error: 22: Invalid argument
    Additional information: 4
    Additional information: 16640
    Additional information: -1
    ORA-01115: IO error reading block from file 90 (block # 16640)
    ORA-27072: File I/O error
    HPUX-ia64 Error: 22: Invalid argument
    ORA-06512: at "SAPR3.TABLE_SPLITTER", line 775
    ORA-06512: at line 1
    I have therefore performed a dbverify; no corruption was recorded.
    When trying to perform the export without table splitting, it works fine... but the processing time is extremely long, as you can imagine. Any help would be highly appreciated.
    Regards,
    Raoul
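    For anyone hitting a similar ORA-01115, it helps to first map the file number from the error message to an actual datafile and tablespace before suspecting the storage layer; a standard dictionary query (file number 90 taken from the error above) does this:
    SELECT file_id, file_name, tablespace_name
    FROM dba_data_files
    WHERE file_id = 90;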

    Thank you Stefan,
    Our HP-UX release does indeed seem to be 11v3:
    [root@:/root]# uname -a
    HP-UX B.11.31 U ia64 2566039091 unlimited-user license
    I'll check the installation of the patch and keep you informed.
    Thank you
    Raoul

  • Create Time Table in Source Database fails

    Is there a complete beginner's guide to creating a time dimension table in the data source? I have no idea why it is not working.
    In the data source I'm using Windows authentication, and my user is a member of the db_owner role in the data source database.
    - Created a multidimensional project connected to a SSAS instance running on my local computer.
    - Added a data source to a sql server 2008 R2 instance on a server in the same domain.
    - Created a time dimension
    - Clicked to create a data source view
    - Chose a random data source view name
    - Next, next
    "Create failed for the Table 'domain\username.Time'."
    Nothing else, no reason why, just that it failed.

    Hi Molotch,
    In your scenario, you created a time dimension in an Analysis Services project and the creation fails, right? It's hard to give you the root cause since we do not know the creation steps and the detailed error message.
    However, a more user-friendly way to create your time dimension is to use the built-in wizard in BIDS. Using the Time Dimension Wizard is a pretty straightforward process. Here is a blog that demonstrates how to create a time dimension using the built-in wizard; for the details, please refer to the link below and see the "Using the Time Dimension Wizard" section.
    http://blog.tangotechnologygroup.com/2011/01/19/creating-a-time-dimension/
    Regards,
    Charlie Liao
    TechNet Community Support

  • ORA-01653: unable to extend table SYS.SOURCE$ by 64 in tablespace SYSTEM

    Hi,
    While creating a package, I got the following error:
    "ORA-00604: error occurred at recursive SQL level 1
    ORA-01653: unable to extend table SYS.SOURCE$ by 64 in tablespace SYSTEM"
    Could anyone please explain, how to solve this problem.
    Thank you,
    Regards,
    Gowtham Sen.

    Solution: increase the size of the SYSTEM tablespace.
    The text of all PL/SQL objects is stored in the database by SYS. Packages, procedures, and functions are stored in SYS.SOURCE$ (which is part of the USER_SOURCE view definition). So you've created a lot of PL/SQL, the table needs to extend, and there isn't room.
    This is a major problem, because it means that nothing in SYSTEM can extend. Add another datafile, or put the tablespace on autoextend.
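    For example (the file names here are hypothetical; substitute your own paths and sizes):
    -- Option 1: let an existing SYSTEM datafile grow on demand
    ALTER DATABASE DATAFILE '/u01/oradata/ORCL/system01.dbf' AUTOEXTEND ON NEXT 64M MAXSIZE 8G;
    -- Option 2: add another datafile to the SYSTEM tablespace
    ALTER TABLESPACE SYSTEM ADD DATAFILE '/u01/oradata/ORCL/system02.dbf' SIZE 512M;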

  • Table Data Source Search Result gives ClassCastException

    I set up a table data source and queried it using the following URL:
    http://machine_name:port/ultrasearch/query/search.jsp?usearch.p_mode=Advanced
    and specified my table data source. The result URLs came up with the right primary key ID. However, when I click the URL, I get:
    java.lang.ClassCastException: com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].sql.OrclCallableStatement
    at oracle.ultrasearch.query.DisplayUtil.displayTableUrl(DisplayUtil.java:131)     
    at display.jspService(_display.java:1568)     [SRC:/display.jsp:81]     
    at com.orionserver[Oracle9iAS (9.0.2.0.0) Containers for J2EE].http.OrionHttpJspPage.service(OrionHttpJspPage.java:56)     
    at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:302)     
    at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:407)     
    at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:330)     
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:336)     
    at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:59)     
    at oracle.security.jazn.oc4j.JAZNFilter.doFilter(JAZNFilter.java:283)     
    at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:523)     
    at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:269)     
    at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:735)     
    at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.AJPRequestHandler.run(AJPRequestHandler.java:151)     
    at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].util.ThreadPoolThread.run(ThreadPoolThread.java:64)
    I have specified NUMBER as the data type for my primary key column and it is of type NUMBER in my table DDL. Is that OK or could this be causing the problem?
    Dinesh

    Dinesh,
    Can you provide the following information for creating the table data source:
    - Is the table data source based on a table or a view?
    - Is the table/view in the local or remote database?
    - If the table is in the local database, is the table in the instance owner schema or another schema?
    - Do you login to Ultra Search Admin Tool as the instance owner or other users?
    - Does the instance owner schema have privileges to read the contents in the table/view?

  • Access ODBC data sources from PL/SQL

    Dear All,
    I would like to know whether there is any way to access ODBC data sources from PL/SQL (i.e., I would like to insert and update records in an MS Access table from PL/SQL procedures and triggers). I would appreciate any help regarding this.

    The only way I know of is to write an external function library and use that to access the ODBC data source... if someone else knows something else, I would be interested in hearing about that too.
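    As a sketch of the other common route: Oracle Heterogeneous Services / Generic Connectivity can expose an ODBC DSN (such as an MS Access database) through a database link, after which ordinary DML works from PL/SQL. The link name, DSN, and table below are assumptions, and the gateway and listener must be configured first:
    -- assumes a configured gateway init file and listener entry for the DSN 'MSACCESS'
    CREATE DATABASE LINK msaccess CONNECT TO "user" IDENTIFIED BY "pass" USING 'MSACCESS';
    BEGIN
      -- "Customers" is a hypothetical Access table
      INSERT INTO "Customers"@msaccess ("ID", "Name") VALUES (1, 'Acme');
      UPDATE "Customers"@msaccess SET "Name" = 'Acme Inc' WHERE "ID" = 1;
      COMMIT;
    END;
    /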

  • What is the difference between using an interface as source and a table as source?

    I am working on a batch flow which needs several steps to populate data from source to target. For example, I need five interfaces to finish the final data loading. For interfaces 2, 3, and 4, I can use either an interface or a temporary table as the source and target. It looks like both cases use tables, whether I use an interface or a temporary table. So my question is: is there any difference between the two (using an interface as source versus using a temporary table as source)?
    Thanks

    If you use a table as the source for the intermediate process, ODI will create a physical temporary table in your work schema (depending on your choice) and populate the data into that table. If you use an interface as the source, it will just create a sub-query instead of a temporary table.
    Thanks,
    Nidhi
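    Schematically (the table and column names here are made up), the generated SQL differs like this:
    -- Temporary-table variant: the intermediate result is materialized first
    CREATE TABLE ODI_WORK.TMP_ORDERS AS
      SELECT order_id, amount FROM SRC.ORDERS WHERE status = 'OPEN';
    INSERT INTO TGT.ORDERS_AGG
      SELECT order_id, SUM(amount) FROM ODI_WORK.TMP_ORDERS GROUP BY order_id;
    -- Interface-as-source variant: the upstream interface is inlined as a sub-query
    INSERT INTO TGT.ORDERS_AGG
      SELECT order_id, SUM(amount)
      FROM (SELECT order_id, amount FROM SRC.ORDERS WHERE status = 'OPEN') S
      GROUP BY order_id;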

  • I am trying to set up a data source for PostgreSQL with CF10 update 14. I get a timeout error returned in less than 2 seconds. I set up a data source for MS SQL 2005 with no problem.

    I need a little help: what have I done wrong or not done?
    Here is the error message that is returned:
    Connection verification failed for data source: Archuleta_xxxxxx
    java.sql.SQLException: Timed out trying to establish connection
    The root cause was that: java.sql.SQLException: Timed out trying to establish connection
    any ideas would be appreciated.

    I see this error in your output:
    2014-11-26 10:55:23,583 ERROR [ThemeAutoDeployer]
    java.io.FileNotFoundException: /tmp/liferay/com/liferay/portal/deploy/dependencies/liferay-theme.tld (Too many open files)
    I'm not across what the EBS-recommended setting for this is, or if there is one. But try running:
    ulimit -n
    ... and if the number is low, edit /etc/security/limits.conf, add some entries for increased "soft nofile" and "hard nofile", and start a new login session so the new limits take effect. See Linux & Java tips: Too many open files.

  • Creating a new logging table in source

    Hi,
    I have two replication setups: one from ECC to Enterprise HANA and the other from ECC to BW on HANA.
    For ECC to BW on HANA, I want to create a separate logging table in the source system to differentiate between the two replications.
    Could you please let me know how to do this?
    Thanks,
    Rajiv

    Hi,
    this is unfortunately the wrong community; please have a look here: SAP LT Replication Server.
    There you can post your question.
    Best,
    Heike

  • Error while using an Oracle table as source file - ODQ for ODI

    Hi All,
    I am getting some errors while working on ODQ with an Oracle table as the source.
    If I try with text files (*.txt) as source and output, it works fine.
    Please let me know how we can connect to an Oracle table as the source.
    In the exported project -> "settings" folder, in the file named eN_transfmr_pXX.stx, what do I need to give for
    /CATEGORY/INPUT/PARAMETER/INPUT_SETTINGS/ARGUMENTS/ENTRY/DATA_FILE_NAME=
    (URL - source file)?
    I tried with
    1. jdbc:oracle:thin:@xxx.xxx.x.xx:1521:ORCL
    2. jdbc:oracle:thin:UserName/Password@// xxx.xxx.x.xx:1521:ORCL
    I am not sure; is there anything missing?
    (Note: for a text file I give "D:\Sourcefolder\customer.txt".)
    If I run the batch file directly from the CMD prompt, it displays the error message
    "Cannot open file"
    If I connect with ODI, it displays the error
    com.sunopsis.dwg.function.SnpsFunctionBaseException: OS command returned 3503. …………………….
    Thanks in advance…
    Rathish A M

    Hi Rathish,
    ODQ supports files as inputs, not Oracle tables. What you should do is:
    - define an ODQ process that takes a file as an input.
    - create an ODI process that dumps your Oracle table into a file that will be used by ODQ (interface or OdiSqlUnload step).
    - run the ODQ process in ODI (in a package).
    - create an ODI interface that will load your ODQ output file into a DB.
    You can profile Oracle tables directly using Oracle Data Profiling.
    Thanks,
    Julien
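    If you only need the flat-file dump and want to test it outside ODI first, a plain SQL*Plus spool produces the same kind of file as the OdiSqlUnload step (the column names below are placeholders; the path is taken from the question):
    SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 32767 TRIMSPOOL ON
    SPOOL D:\Sourcefolder\customer.txt
    SELECT cust_id || ';' || cust_name || ';' || cust_city FROM customers;
    SPOOL OFF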

  • Unicode export: Table-splitting and package splitting

    Hi SAP experts,
    I know there are a lot of forum threads related to this topic, but I have some new questions and hence am posting a new thread.
    We are in the process of doing a Unicode conversion in our landscape (a CRM 7.0 system based on NW 7.01) and we are running on AIX 6.1 and DB2 9.5. The database size is around 1.5 TB, and hence we want to optimize the export and import in order to reduce the downtime. As part of the process, we have tried table-splitting and parallel export-import to reduce the downtime.
    However, we have some doubts whether this table-splitting has actually worked in our scenario, as the export executed for nearly 28 hours.
    The steps we followed:
    1.) Doing the export preparation using SAPINST.
    2.) Doing the table splitting preparation, by creating a table input file having entries in the format <tablename>%<No. of splits>. Also, we have used the latest R3ta file and the dbdb6slib.o (belonging to version 7.20, even though our system is on 7.01) using SAPINST.
    3.) Starting the export using SAPINST.
    Some observations and questions:
    1.) After completion of the table splitting preparation, .WHR files were generated for each of the tables in the DATA directory of the export location. However, how many .WHR files should be created, and on what basis are they created?
    2.) I will take the example of the table PRCD_CLUST (a cluster table) in our environment, which we split. 29 *.WHR files were created for this particular table. The number of splits given for this table was 36, and the table size is around 72 GB. We noticed that the first 28 .WHR files for this table had lots of records, but the last, 29th .WHR file had only 1 record. We also noticed that the packages/splits for the first 28 splits were created quite fast, but the last, 29th one took a long time (several hours) to complete. Lots of packages (around 56) of size 1 GB each were generated for this 29th split, and there was only one R3load running for it, generating packages one by one.
    3.) Our question here is: is there any rule of thumb for deciding on the number of splits for a table? Also, during the export, is there anything that needs to be specified when giving the inputs for table splitting in the screen?
    4.) Also, what exactly is the difference between table-splitting and package-splitting? Are they both effective together?
    If you have any questions and or need any clarifications and further inputs, please let me know.
    It would be great if we could get any insights on this whole procedure; we know a lot of things are taken care of by SAPINST itself in the background, but we just want to be certain that we have done the right thing and that this is the way it should work.
    Regards,
    Santosh Bhat

    Hi,
    First of all, please ignore my very first response... I accidentally posted a response to some other thread; sorry for that.
    Now coming to your questions...
    > 1.) Can package splitting and table-splitting be used together? If yes or no, what exactly is the procedure to be followed? As I observed, the packages also have entries for the tables that we decided to split. So does package splitting or table-splitting override the other, such that only one of the two can be effective at a time?
    Package splitting and table splitting work together, because they serve different purposes.
    My way of doing it is...
    When I do the package split I choose a packageLimit of 1000 and also split out the tables (which I selected for table split) into separate packages (one package per table). I do it because that helps me track those tables.
    Once the above is done I follow it up with R3ta and the wheresplitter for those tables.
    This is followed by the manual migration monitor to do the export/import. As mentioned in the previous reply above, you need to ensure you sequence your packages properly: large tables are exported first, use sections in the package list file, etc.
    > 2.) If you are well versed in the table splitting procedure, could you maybe describe the exact procedure in brief?
    Well, I would say run R3ta (it will create multiple select queries) followed by the wheresplitter (which will just split each of the selects into multiple WHR files)...
    Best would be to go through some documentation on table splitting and let me know if you have a specific query. Don't miss the role of the hints file.
    > 3.) Also, I have mentioned the version of the R3ta and library file in my original post. Is this likely to be an issue? Also, is there a thumb rule to decide on the number of splits for a table?
    The rule is: use the executable of the kernel version supported by your system version. I am not well versed with 7.01 and 7.2 support... to give you an example, I should not use a 700 R3ta on a 640 system, although it works.
    > 1.) After completion of the table splitting preparation, there were .WHR files that were generated for each of the tables in the DATA directory of the export location. However, how many .WHR files should be created and on what basis are they created?
    If you ask for 10 splits... you will get 10 splits, or in some cases 11, the reason being the field it is using to split the table (the where clause). But I am not 100% sure about it.
    > 2) I will take an example of a table PRCD_CLUST (cluster table) in our environment, which we had split. We had 29 *.WHR files that were created for this particular table. The number of splits given for this table was 36 and the table size is around 72 GB. Also, we noticed that the first 28 .WHR files for this table had lots of records but the last, 29th .WHR file had only 1 record. But we also noticed that the packages/splits for the first 28 splits were created quite fast but the last, 29th one took a long time (several hours) to get completed. Also, lots of packages were generated (around 56) of size 1 GB each for this 29th split. Also, there was only one R3load which was running for this 29th split, and was generating packages one by one.
    Not sure why you got 29 splits when you asked for 36; one reason might be that the field (key) used for the split didn't have more than 28 unique values. I don't know how PRCD_CLUST is split; you need to check the hints file for the "key". One example: suppose my table is split using company code and I have 10 company codes; even if I ask for 20 splits I will get only 10 splits (WHRs).
    Yes, the 29th file will always have fewer records. If you open the 29th WHR you will see that it has the "greater than" clause. The first and the last WHR files have the "less than" and "greater than" clauses, a kind of safety which allows you to prepare for the split even before the downtime has started. These two WHRs ensure that no record gets missed, even though you might have prepared your WHR files a week before the actual migration. A schematic example follows below.
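    As a schematic illustration (the split column and boundary values are invented), a 3-way split produces WHERE clauses along these lines, with open-ended first and last ranges so that no row can fall outside them:
    WHR 1: WHERE ("KEYCOL" < '100')
    WHR 2: WHERE ("KEYCOL" >= '100') AND ("KEYCOL" < '200')
    WHR 3: WHERE ("KEYCOL" >= '200')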
    > 3) Also, our question here is: is there any thumb rule for deciding on the number of splits for a table? Also, during the export, is there anything that needs to be specified while giving the inputs for table splitting in the screen?
    I am not aware of any rule of thumb. For a first iteration you might choose something like 10 splits for 50 GB, 20 for 100 GB. If any of the tables overshoots the window, you can then try increasing or decreasing the number of splits for that table. For me, a couple of times the total export/import time improved by reducing the splits of some tables (I suppose I was over-splitting those tables).
    Regards,
    Neel

  • New table in msdb database of SQL 2014

    Hi All, I've seen a table in the msdb database of the SQL 2014 CTP version.
    The table name is smart_backup_files.
    Can anybody explain the purpose of this table and where it is used?
    Asheesh Pandey, SQL Server DBA

    Smart Backup is a feature in SQL Server 2014 whereby SQL Server can manage your backup schedule using the new Managed Backup to Windows Azure feature. It determines backup frequency based on database usage patterns.
    http://blogs.technet.com/b/dataplatforminsider/archive/2013/10/22/smart-secure-cost-effective-sql-server-back-up-to-windows-azure.aspx
    smart_backup_files - this table is useful for monitoring the backup status of managed backups/smart backups.
    http://msdn.microsoft.com/en-us/library/dn449498(v=sql.120).aspx
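    To see what managed backup has recorded, a simple query against the table works (the dbo schema is an assumption here, and SELECT * is used because the column names vary across CTP builds):
    SELECT * FROM msdb.dbo.smart_backup_files;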

  • Create and insert into a table from Oracle to MS SQL Server

    Hello,
    Oracle Database 11g and Red Hat 5.
    I have a rather unusual issue. I am handling the Oracle DB (the remote DB with all the important data). On the other side there is an MS SQL Server DB (a local DB with some testing data in it). All the users will access the Oracle DB for the actual processing, but for some time they need to apply some of their own concepts, so they will transfer data from Oracle to MS SQL Server.
    I want to create code in the Oracle DB, like a procedure, which will create a table in MS SQL Server and insert data into it, and also create some metadata tables to keep some of my tables' info on the MS SQL Server DB. If the table is present it should append the data... like many things...
    Overall my question is: how can I write code to perform these operations on a remote DB, given that these operations are DDL and target MS SQL Server (non-Oracle)?
    Please guide me with some ideas or solutions.
    Also, please share any good links to study.
    Thanks in advance.

    I'm not sure why you never visit http://tahiti.oracle.com prior to asking any question. Is it forbidden in your locale? Are you afraid of it? Will your salary be decreased when you visit the documentation?
    http://www.oracle.com/pls/db111/search?word=sql+server&partno=
    should provide sufficient information.
    Your doc question must be considered a violation of Forum Etiquette and an abuse of this forum.
    Sybrand Bakker
    Senior Oracle DBA
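    For completeness, the documented route is an Oracle Database Gateway (DG4MSQL, or DG4ODBC) database link to SQL Server: ordinary DML then works through the link, while DDL on the SQL Server side has to be sent with DBMS_HS_PASSTHROUGH. A minimal sketch, assuming a configured link named mssql_link and hypothetical table names:
    DECLARE
      rows_affected PLS_INTEGER;
    BEGIN
      -- DDL cannot be issued over a heterogeneous link directly, so pass it through
      rows_affected := DBMS_HS_PASSTHROUGH.EXECUTE_IMMEDIATE@mssql_link(
        'CREATE TABLE test_tab (id INT, note VARCHAR(100))');
    END;
    /
    -- Ordinary DML and queries then work directly over the link:
    INSERT INTO test_tab@mssql_link (id, note) VALUES (1, 'copied from Oracle');
    COMMIT;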

  • Split Source Mapping Generation Task

    Hi
    I created a Dynamic Web Project in IBM Rational Application Developer Version 8. I am using BEA WebLogic 10.0.2 as the application server. When creating the dynamic web project, I added the project to an EAR and named it testEAR. The project compiles without any errors and the EAR is generated. When I try to add it to my server and deploy, I get the following error:
    Runtime exception occurred in publish task 'Split Source Mapping Generation Task'.
    Path must include project and resource name: /testEAR
    I checked web.xml for duplicate entries and didn't find anything. Below are my XML files for the Deployment Assembly. The first is for the testEAR project:
    <?xml version="1.0" encoding="UTF-8"?><project-modules id="moduleCoreId" project-version="1.5.0">
    <wb-module deploy-name="testEAR">
    <wb-resource deploy-path="/" source-path="/"/>
    <dependent-module archiveName="XYZ.war" deploy-path="/" handle="module:/resource/XYZ/XYZ">
    <dependent-object>WebModule_1308854753228</dependent-object>
    <dependency-type>uses</dependency-type>
    </dependent-module>
    </wb-module>
    </project-modules>
    The second is for the Dynamic Web Project:
    <?xml version="1.0" encoding="UTF-8"?><project-modules id="moduleCoreId" project-version="1.5.0">
    <wb-module deploy-name="XYZ">
    <wb-resource deploy-path="/" source-path="/WebContent"/>
    <wb-resource deploy-path="/WEB-INF/classes" source-path="/src"/>
    <property name="context-root" value="XYZ"/>
    <property name="java-output-path" value="/XYZ/WebContent/WEB-INF/classes"/>
    </wb-module>
    </project-modules>
    Please let me know how to resolve this problem.

    Hi,
    This failure probably occurred because a Content Directory was not specified for the EAR project.
    If a Content directory is not specified, then the ".settings" and ".project" files will be part of the source path for the BEA compiler.
    As per the requirement, the WLS build scripts parse/compile and copy all the contents of the source folder.
    So, as part of that process, the ".settings" and ".project" directories are also copied into the temporary build directory.
    This causes the failure during deployment, as ".settings" and ".project" components can't be deployed.
    The Content directory is a requirement only in a WLS environment, due to the build logic of WLS.
    By default, the WLS Eclipse tasks create a Content directory for an EAR project, so this limitation is not documented.
    But when you import an existing EAR directory, this limitation is exposed.
    Creating the EAR Content folder should be the best solution.
    Steps to modify imported applications:
    1. Create a directory for the EAR content (for APP-INF, META-INF, and all other modules or files that are part of the EAR) and copy all the files (except ".settings" and ".project") into the EAR content directory. (This has to be performed in Windows Explorer.)
    2. Then edit the EAR content directory reference for the deploy-path in the file .settings/org.eclipse.wst.common.component:
    For eg:
    <wb-resource deploy-path="/" source-path="/"/>
    to
    <wb-resource deploy-path="/" source-path="/EarContent"/>
    3. Refresh the eclipse project.
    4. Delete the temporary project folder that is created in the below location:
    <Workspace Home>\.metadata\.plugins\org.eclipse.core.resources\.projects\
    For eg:
    cd D:\Cases\EclipseWorkspace\.metadata\.plugins\org.eclipse.core.resources\.projects\
    del WebTestEAR\
    ** This is a temporary folder which will be created by the WebLogic build scripts during deployment. **
    5. Now, try to deploy the EAR project.
    Hope it helps.

  • Data Source for MS SQL Server

    I need help on how to add a data source from MS SQL Server. Please give an example using PUB as the database, where the server is N12345\SQLSERVER. Thanks.

    Connecting to a SQL Server 2000 server worked very easily. Make sure you start SQL Server and the Sun Java Studio Creator IDE. In the Server Navigator pane, right-click on the 'Data Sources' node and pick the menu item 'Add Source'. This brings up a dialog box wherein you need to enter the hostname, password, database information, etc. For detailed screenshots, see my page at
    http://www.mysorian.com/htek. The last item in the scrolling page is the one you should look for.
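    For reference, the connection details for the example in the question would look something like this. The URL format shown is for Microsoft's newer JDBC driver; drivers of the SQL 2000 era use jdbc:microsoft:sqlserver://host:1433;DatabaseName=... instead, so treat the exact string as an assumption to verify against your driver's documentation:
    Server Name: N12345\SQLSERVER
    Database:    PUB
    JDBC URL:    jdbc:sqlserver://N12345\SQLSERVER;databaseName=PUB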
