Timestamp in Data Store defaults to Logical Length of 13

I inserted a data store in Designer; the table has fields that are TIMESTAMP(6) WITH TIME ZONE.
The data store's column Logical Length defaults to 13.
The execution fails with ORA-30088 (datetime/interval precision is out of range) because the temp table DDL is generated as TIMESTAMP(13) WITH TIME ZONE.
I fixed the issue by changing the 13 to 6 on each column.
What I want to know is: how can I change the default to 6?
Or why is ODI not picking it up properly?
Is there a fix for this?

Hi,
Go to Topology -> Physical Architecture -> Oracle -> expand Data Types and edit TIMESTAMP WITH TIME ZONE.
See if the following is specified against "Create Table Syntax" and "Writable Datatype Syntax":
TIMESTAMP(%L) WITH TIME ZONE
If yes, edit both entries and remove the (%L) from them,
i.e. so they become TIMESTAMP WITH TIME ZONE.
Save it and execute your interface.
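As a quick illustration of why the default of 13 fails (the table and column names below are hypothetical): Oracle only accepts a fractional-seconds precision between 0 and 9, so DDL generated from a Logical Length of 13 raises ORA-30088, while 6, or no precision at all, works.
-- Fails with ORA-30088: fractional-seconds precision must be between 0 and 9
CREATE TABLE c$_my_table_bad (load_ts TIMESTAMP(13) WITH TIME ZONE);
-- Works: keep the column's real precision ...
CREATE TABLE c$_my_table_v1 (load_ts TIMESTAMP(6) WITH TIME ZONE);
-- ... or drop the precision entirely (what removing (%L) produces);
-- Oracle then defaults to 6 fractional-second digits
CREATE TABLE c$_my_table_v2 (load_ts TIMESTAMP WITH TIME ZONE);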
Thanks,
Sutirtha

Similar Messages

  • Timestamp column, when reversed in ODI, Logical Length increases to 11

    Hi
    I have a TIMESTAMP column in an Oracle database. When I look at it in SQL Developer
    I see the data type TIMESTAMP(6), but when I reverse-engineer it in ODI the Logical Length increases
    to 11, and this gives an error when I execute my interface.
    I have many TIMESTAMP columns like that in my project, and for the interface to work
    I have to manually decrease the length from 11 to 6; then it works fine.
    Is there any workaround?
    Thanks in advance.

    Hi,
    Trying to help you. :-)
    Try to use the Datatypes options in ODI. Go to Topology Manager -> Physical Architecture, expand Oracle and just play around with the data types (try to create a datatype for TIMESTAMP or else edit the existing one) and give TIMESTAMP(%L).
    Maybe you can find a solution.
    All the best. :)

  • Reverse engg data store using ODI SDK

    How do I reverse-engineer a data store, and not a whole model, using the ODI SDK?
    Is there any class for reverse engineering? Or should I use KMs?

    When I reverse-engineer my delimited or fixed file, I am able to see the data of the data store; I used the same data store in an interface.
    Yes, I used the data store in an interface using code (the SDK).
    I am not doing anything manually in Studio; everything is done through Java code (using the SDK)... but I have checked the data of the data store by right-clicking on it, and it shows the proper data.
    The file descriptor contains the start position and the number of bytes.
    Your text file looks like FIXED format;
    my delimited-format text file:
    empNo,empName
    1,Deepa
    2,Deepali
    3,Patil
    4,Deeps
    and FIXED format text file:
    10   
    Georges                                     
    Hamilton                                    
    15/01/2001 00:00:00
    11   
    Andrew                                      
    Andersen                                    
    22/02/1999 00:00:00
    12   
    John                                        
    Galagers                                    
    20/04/2000 00:00:00
    13   
    Jeffrey                                     
    Jeferson                                    
    10/06/1988 00:00:00
    20   
    Jennie                                      
    Daumesnil                                   
    28/02/1988 00:00:00
    21   
    Steve                                       
    Barrot                                      
    24/09/1992 00:00:00
    22   
    Mary                                        
    Carlin                                      
    14/03/1995 00:00:00
    30   
    Paul                                        
    Moore                                       
    11/03/1999 00:00:00
    31   
    Paul                                        
    Edwood                                      
    18/03/2003 00:00:00
    32   
    Megan                                       
    Keegan                                      
    29/05/2001 00:00:00
    40   
    Rodolph                                     
    Bauman                                      
    29/05/2000 00:00:00
    41   
    Stanley                                     
    Fischer                                     
    12/08/2001 00:00:00
    42   
    Brian                                       
    Schmidt                                     
    25/08/1992 00:00:00
    50   
    Anish                                       
    Ishimoto                                    
    30/01/1992 00:00:00
    51   
    Cynthia                                     
    Nagata                                      
    28/02/1994 00:00:00
    52   
    William                                     
    Kudo                                        
    28/03/1993 00:00:00
    If the file format is delimited, we need not worry about the physical or logical length, because we are able to do reverse engineering for that, which automatically sets all the fields.
    The odiColumn.setLength(length) method sets both the physical and the logical length.

  • No persistent store with the logical name could be found on the server

    Weblogic 10.3.6 standalone server (no cluster).
    I want to move ejb timers persistence from the default store to DB.
    I created a new data source, non-XA, global transactions not supported, targeted to my managed server. Tested the configuration against the DB: OK.
    I created a JDBC persistent store (type JDBCStore), attached it to data source, targeted to my managed server, committed changes OK.
    In my application, TimedObject is implemented by a Stateless Session bean.
    I modified the weblogic-ejb-jar.xml descriptor for my SLSB, specifying the persistent store logical name:
    <stateless-session-descriptor>
                <timer-descriptor>
                    <persistent-store-logical-name>MyStore</persistent-store-logical-name>
                </timer-descriptor>
    </stateless-session-descriptor>
    When I deploy my application, I get the following error
    [EJB:011112]Error initializing the EJB Timer store for the EJB '...' The weblogic-ejb-jar.xml deployment descriptor or annotation for this EJB contains a persistent-store-logical-name setting but no persistent store with the logical name 'MyStore' could be found on the server 'yyyy'.
    Why ?

    Hi JB,
    The OTM schema has been created on an external database and not on the embedded XE database. Using 'Oracle Application Testing Database Configuration' we were able to successfully configure the said DB. We are able to use OTM through the 'Oracle Test Manager - Win32 Client' also. We are being held up by the above error when we attempt to deploy and use the server based OTM application.
    Do we need to configure the above Schema under the Weblogic console also?
    Regards,
    Aniket.

  • Unable to load data store in a new model

    Hi,
    I have Windows XP Pro SP3 and MS SQL Server 2008, which has the AdventureWorksDW2008R2 database. I have Oracle Data Integrator.
    I have created a physical and a logical schema and have established a connection with my MS SQL 2008, and the test connection is successful. I created a new model folder and a new model within it. In the new model, I gave the technology as Microsoft SQL Server, selected the logical schema name, and on the Reverse tab chose Standard and context Global; when I click on "Reverse", no tables are seen under the new model.
    There is no error message either. I have been trying to figure this out for the last 12 hrs... can someone please suggest/guide me here? This is my first time using ODI.
    Goo

    Hi,
    Thanks for the reply. The thread you provided gives details about how to get connected. But as I mentioned earlier, I can connect; I am just not able to bring in the data stores...
    Any help will be appreciated. I can send the screen shot if you would like.
    Goo

  • How to check what's stored in the data store at different steps in the model?

    I would like to place a table or form temporarily within the model to display the values in the data store as they change at different points at runtime.
    I tried simply adding a form, creating the same field types as are in the store, and then mapping the form fields' default values to the data store's fields. This does not work as it doesn't update automatically.
    Tips?
    Henning

    Hi Jarod,
    I have a similar problem, using the data form's data store as input for getting results.
    I wanted to know whether it is possible to use an input field instead of an expression box, so that I can click the submit button based on the input displayed from the data store.
    Also, if I use an expression box the values get concatenated; instead I want to overwrite them (I don't know whether the expression box value can be used as input), but I could not overwrite the values. Can you let me know what function I need to use in the formula of the expression box?
    I have posted the same in a thread:
    want the data store values to be displayed in input field of form
    Thank you
    K.Srinivas

  • How to Set a Variable with data from Source Data Store

    Hello ODI Experts,
    I have created a Physical & Logical Schema and a source data store to pick up data from a database table.
    On the other hand, I have a few variables that I will pass in a web service call (ODIInvokeWebService tool).
    Would you please guide me on how I can set variables from my source data store?
    Thanks & Regards,
    Ahsan

    Hello Bos/Damodhar/ODI Experts,
    Doesn't it give me a less optimized approach, picking one column per query (per variable)?
    Let's say I have to pick 35 columns from a table and put them in 35 variables... it would mean running 35 queries to fetch one record from the database table.
    Doesn't that seem less performance-effective (less optimized)? A little scary... is there anything better I can do to make it more optimized?
    Another question: if multiple new values have come into the DB table, since I am using Refresh Variable, would this variable have multiple values in it?
    Thanks for all your help,
    Ahsan
    Edited by: Ahsan Asghar on 21-Jun-2011 07:46
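    For context, here is a minimal sketch of the kind of refresh query involved (the schema, table and column names are hypothetical): an ODI variable's refresh query is expected to return a single column, and the variable keeps only the first value returned, so the pattern has to be repeated once per variable.
    -- Refresh query for one hypothetical variable, e.g. #V_EMP_NAME;
    -- each additional variable needs its own single-column query like this
    SELECT emp_name
    FROM   source_schema.employees
    WHERE  emp_no = 100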

  • Use GET PERNR without any screen default from Logical DB.

    Could anyone help me with how to avoid using the screen defaults from the logical DB when using GET PERNR?

    Are you saying that, using the logic below, I can avoid using GET PERNR?
    TABLES: RP50G,PERNR,PYORGSCREEN,PYTIMESCREEN.
    DATA: IN_RGDIR LIKE PC261 OCCURS 0 WITH HEADER LINE,
          WA_RT LIKE PC207 OCCURS 0 WITH HEADER LINE,
          SEQNR LIKE PC261-SEQNR,
          RESULT TYPE PAY99_RESULT.
    DATA: M_START LIKE SY-DATUM,
          M_END LIKE SY-DATUM.
    FORM GET_PAYROLL USING P_PERNR M_START M_END.
    *This FM help us to get the Sequence number used for the
    *employee on the payroll.
      CALL FUNCTION 'CU_READ_RGDIR'
           EXPORTING
                PERSNR          = P_PERNR
           TABLES
                IN_RGDIR        = IN_RGDIR
           EXCEPTIONS
                NO_RECORD_FOUND = 1
                OTHERS          = 2.
    *We read it using two dates, which corresponds to the month *we need
      READ TABLE IN_RGDIR WITH KEY FPBEG = M_START
                                   FPEND = M_END.
      SEQNR = IN_RGDIR-SEQNR.
    *This FM actually reads the payroll and get the information
    *we need.
      CALL FUNCTION 'PYXX_READ_PAYROLL_RESULT'
           EXPORTING
                CLUSTERID                    = 'XX'
    *In CLUSTERID use the country of your choice
                EMPLOYEENUMBER               = P_PERNR
                SEQUENCENUMBER               = SEQNR
                READ_ONLY_INTERNATIONAL      = 'X'
           CHANGING
                PAYROLL_RESULT               = RESULT
           EXCEPTIONS
                ILLEGAL_ISOCODE_OR_CLUSTERID = 1
                ERROR_GENERATING_IMPORT      = 2
                IMPORT_MISMATCH_ERROR        = 3
                SUBPOOL_DIR_FULL             = 4
                NO_READ_AUTHORITY            = 5
                NO_RECORD_FOUND              = 6
                VERSIONS_DO_NOT_MATCH        = 7
                OTHERS                       = 8.
    *We just need to read the result table.
      LOOP AT RESULT-INTER-RT INTO WA_RT.
        CASE WA_RT-LGART.
          WHEN '9010'.
             MOVE WA_RT-BETRG TO T_ANYTABLE-SOMEPAY.
        ENDCASE.
      ENDLOOP.
    ENDFORM.

  • RBS Migration and Data Store Expansion

    I'm seeking some insight into whether (and how) remote blobs are migrated. For example, if I've configured RBS for SharePoint 2010 but I'm approaching the storage maximum on the hardware of my remote blob location - how would I go about moving the blobs elsewhere and 'pointing' SQL Server and SharePoint to the new locations? In addition, if I were to simply add another storage location - how does one go about re-configuring RBS to store blobs in a new/additional location?
    TIA.
    -Tracy

    1. Live SharePoint 2010 environment with SQL 2008 R2
    a. Take a backup from the 2010 live server.
    i. Open Management Studio on the SQL server.
    ii. Take a backup of the content database of the live application.
    2. QA SharePoint 2010 environment with SQL 2008 R2
    a. Restore the SQL backup
    i. Open Management Studio on the SQL server.
    ii. Restore the database.
    b. Create a web application
    i. Open the SharePoint server.
    ii. Open Central Administration.
    iii. Create a web application with classic authentication.
    c. Dismount the database that is attached to the existing application
    i. Open the SharePoint PowerShell on the SharePoint server.
    ii. Run the command below.
    Dismount-SPContentDatabase <Database name>
    Note: Change the database name.
    d. Mount the restored database to the existing application
    i. Open the SharePoint PowerShell on the SharePoint server.
    ii. Run the command below.
    Mount-SPContentDatabase <Database name> -DatabaseServer <Database server name> -WebApplication <Web application>
    Note: Change the database name and the web application URL.
    iii. Open SharePoint Designer, change the master page and publish it.
    iv. Set the test page as the home page.
    v. Test user logging
    Log in with 2-3 different users and verify that they are able to log in.
    e. Configure RBS
    i. Enable FILESTREAM on the database server
    Open SQL Server Configuration manager on SQL Server.
    From left panel click on SQL Server Services.
    From right panel select the instance of SQL Server on which you want to enable FILESTREAM.
    Right-click the instance and then click Properties.
    In the SQL Server Properties dialog box, click the FILESTREAM tab.
    Select the Enable FILESTREAM for Transact-SQL access check box.
    If you want to read and write FILESTREAM data from Windows, click Enable FILESTREAM for file I/O streaming access. Enter the name of the Windows share in the Windows Share Name box.
    If remote clients must access the FILESTREAM data that is stored on this share, select allow remote clients to have streaming access to FILESTREAM data.
    Click Apply and ok.
    ii. Set the FILESTREAM access level
    Open SQL Server Management Studio and connect to the SQL database instance.
    Right-click on the database instance and open Properties.
    Click on Advanced in the left panel.
    Find the "Filestream Access Level" property and set the value to "Full access enabled".
    Click OK and exit the window.
    iii. Configure the SharePoint FILESTREAM access level
    Open a query window at the root.
    Execute the following query:
    EXEC sp_configure filestream_access_level, 2
    RECONFIGURE
    Restart SQL services
    Note: You will get the message "Configuration option 'filestream access level' changed from 2 to 2. Run the RECONFIGURE statement to install."
    iv. Provision a BLOB store for each content database
    Click the content database for which you want to create a BLOB store, and then click New Query.
    Execute the following queries:
    use [<Database name>]
    if not exists
    (select * from sys.symmetric_keys
    where name = N'##MS_DatabaseMasterKey##')
    create master key encryption by password = N'Admin Key Password !2#4'
    Note:
    Change the database name
    You get “Command(s) completed successfully.” Message.
    use [<Database name>]
    if not exists
    (select groupname from sysfilegroups
    where groupname=N'RBSFilestreamProvider')
    alter database [<Database name>]
    add filegroup RBSFilestreamProvider contains filestream
    Note:
    Change the database name.
    You get “Command(s) completed successfully.” Message.
    use [<Database name>]
    alter database [<Database name>]
     add file (name = RBSFilestreamFile, filename =
    '<E:\SQL\Data\PetroChina>')
    to filegroup RBSFilestreamProvider
    Note:
    Change the database name and store path.
    If you get message “FILESTREAM file 'RBSFilestreamFile' cannot be added because its destination filegroup cannot have more than one file.”
    Ignore it.
    v. Install the RBS client library on each Web server
    To install the RBS client library on the first Web server:
    Open the SharePoint Web server.
    Open a command prompt.
    Execute the following command:
    msiexec /qn /lvx* rbs_install_log.txt /i RBS.msi TRUSTSERVERCERTIFICATE=true FILEGROUP=PRIMARY DBNAME=<Database name> DBINSTANCE=<Database server> FILESTREAMFILEGROUP=RBSFilestreamProvider FILESTREAMSTORENAME=FilestreamProvider_1
    Note:
    Change the database name and database instance name.
    DB instance should be <server name\instance name>
    Download RBS.msi for respective SQL version.
    To install the RBS client library on all additional Web and application servers:
    Open the SharePoint Web server.
    Open a command prompt.
    Execute the following command:
    msiexec /qn /lvx* rbs_install_log.txt /i RBS.msi DBNAME=<Database name> DBINSTANCE=<Database server> ADDLOCAL=Client,Docs,Maintainer,ServerScript,FilestreamClient,FilestreamServer
    Note:
    Change the database name and database instance name.
    DB instance should be <server name\instance name>
    vi. Enable RBS for each content database
    You must enable RBS on one Web server in the SharePoint farm. It is not important which Web server you select for this activity. You must perform this procedure once for each content database.
    Open the SharePoint Web server.
    Open the SharePoint PowerShell.
    Execute the script below:
    $cdb = Get-SPContentDatabase <Database name>
    $rbss = $cdb.RemoteBlobStorageSettings
    $rbss.Installed()
    $rbss.Enable()
    $rbss.SetActiveProviderName($rbss.GetProviderNames()[0])
    $rbss
    Note: Change the database name.
    vii. Test the RBS installation
    On the computer that contains the RBS data store, browse to the RBS data store directory and note its size.
    On the SharePoint farm, upload a file that is at least 100 kilobytes (KB) to a document library.
    On the computer that contains the RBS data store, browse to the RBS data store directory and confirm its size again.
    It must be larger than before.
    viii. Test user logging
    Log in with 2-3 different users and verify that they are able to log in.
    f. Migrate BLOBs from RBS back to the SQL database and completely remove RBS
    i. Migrate all content from RBS to SQL and disable RBS for the content DB:
    Open the SharePoint server.
    Open the SharePoint Management PowerShell.
    Execute the script below:
    $cdb=Get-SPContentDatabase <Database name>
    $rbs=$cdb.RemoteBlobStorageSettings
    $rbs.GetProviderNames()
    $rbs.SetActiveProviderName("")
    $rbs.Migrate()
    $rbs.Disable()
    Note:
    Migrate() might take some time depending on amount of data in your RBS store.
    Change the database name.
    If you get message on the PowerShell “PS C:\Users\sp2010_admin> $rbs.Migrate()
    Could not read configuration for log provider <ConsoleLog>. Default value used.
    Could not read configuration for log provider <FileLog>. Default value used.
    Could not read configuration for log provider <CircularLog>. Default value used.
    Could not read configuration for log provider <EventViewerLog>. Default value used.
    Could not read configuration for log provider <DatabaseTableLog>. Default value used.", then wait for a while; it will take some time for the migration to start.
    ii. Change the default RBS garbage collection window to 0 on your content DB:
    Open the SQL server.
    Open SQL Server Management Studio.
    Select your content DB and open a new query window.
    Execute the SQL queries below:
    exec mssqlrbs.rbs_sp_set_config_value 'garbage_collection_time_window', 'time 00:00:00'
    exec mssqlrbs.rbs_sp_set_config_value 'delete_scan_period', 'time 00:00:00'
    Note:
    Run the SQL queries one by one.
    You will get the "Command(s) completed successfully." message.
    iii. Run the RBS Maintainer (and disable the task if you scheduled it):
    Open the SharePoint server.
    Open a command prompt.
    Run the command below:
    "C:\Program Files\Microsoft SQL Remote Blob Storage 10.50\Maintainer\Microsoft.Data.SqlRemoteBlobs.Maintainer.exe" -connectionstringname RBSMaintainerConnection -operation GarbageCollection ConsistencyCheck ConsistencyCheckForStores -GarbageCollectionPhases rdo -ConsistencyCheckMode r -TimeLimit 120
    iv. Uninstall RBS:
    Open the SQL server.
    Open SQL Server Management Studio.
    On your content DB, run the SQL query below:
    exec mssqlrbs.rbs_sp_uninstall_rbs 0
    Note:
    If you get the message "The RBS server side data cannot be removed because there are existing BLOBs registered. You can only remove this data by using the force_uninstall parameter of the mssqlrbs.rbs_sp_uninstall stored pro", then run "exec mssqlrbs.rbs_sp_uninstall_rbs 1" instead.
    You will get “Command(s) completed successfully.” Message.
    v. Uninstall SQL Remote Blob Storage from Add/Remove Programs.
    I found that there were still FILESTREAM references in my DB, so remove those references:
    Open the SQL server.
    Open SQL Server Management Studio.
    Run the SQL queries below on your content DB:
    ALTER TABLE [mssqlrbs_filestream_data_1].[rbs_filestream_configuration] DROP column [filestream_value]
    ALTER TABLE [mssqlrbs_filestream_data_1].[rbs_filestream_configuration] SET (FILESTREAM_ON = "NULL")
    Note:
    Run one by one SQL query
    vi. Now you can remove the file and filegroup for FILESTREAM:
    Open the SQL server.
    Open SQL Server Management Studio.
    Open a new query window.
    Execute the SQL query below:
    ALTER DATABASE <Database name> Remove file RBSFilestreamFile;
    Note:
    Change the database name
    If it gives the message "The file 'RBSFilestreamFile' cannot be removed because it is not empty.", remove all tables prefixed with "mssqlrbs_" from your database and execute the SQL query again.
    This query takes time depending on your database size (up to about 30 minutes).
    You will get the "The file 'RBSFilestreamFile' has been removed." message.
    Execute the SQL query below:
    ALTER DATABASE <Database name> REMOVE FILEGROUP RBSFilestreamProvider;
    Note:
    Change the database name
    You get “The filegroup 'RBSFilestreamProvider' has been removed.” Message.
    Or, if you get the message "Msg 5524, Level 16, State 1, Line 1: Default FILESTREAM data filegroup cannot be removed unless it's the last FILESTREAM data filegroup left.", ignore it.
    vii. Remove the BLOB store installation
    Open the SharePoint server.
    Run the RBS.msi setup file and choose the Remove option.
    Finish the wizard.
    viii. Disable FILESTREAM in SQL Server Configuration Manager
    Disable FILESTREAM in SQL Server Configuration Manager for your instance (if you do not use it anywhere aside from this single content DB with SharePoint), restart SQL, run an IIS reset, and test.
    ix. Test whether RBS has been removed
    On the computer that contains the SQL database, note the size of the SQL database (.mdf file).
    On the SharePoint farm, upload a file that is at least 100 kilobytes (KB) to a document library.
    On the computer that contains the SQL database, confirm the size of the SQL database again.
    It must be larger than before. If there is no difference, ignore it; just check that the store is no longer in SQL.
    x. Test user logging
    Log in with 2-3 different users and verify that they are able to log in.
    g. Convert classic-mode web applications to claims-based authentication
    i. Open the SharePoint server.
    ii. Open the SharePoint PowerShell.
    iii. Execute the script below:
    $WebAppName = "<URL>"
    $wa = get-SPWebApplication $WebAppName
    $wa.UseClaimsAuthentication = $true
    $wa.Update()
    $account = "<Domain name\User name>"
    $account = (New-SPClaimsPrincipal -identity $account -identitytype 1).ToEncodedString()
    $wa = get-SPWebApplication $WebAppName
    $zp = $wa.ZonePolicies("Default")
    $p = $zp.Add($account,"PSPolicy")
    $fc=$wa.PolicyRoles.GetSpecialRole("FullControl")
    $p.PolicyRoleBindings.Add($fc)
    $wa.Update()
    $wa.MigrateUsers($true)
    $wa.ProvisionGlobally()
    iv. Test user logging
    Log in with 2-3 different users and verify that they are able to log in.
    h. Take a SQL backup from the QA server
    i. Open the SQL server.
    ii. Open Management Studio on the SQL server.
    iii. Select the content database.
    iv. Take a backup of the content database.
    Information: This SQL backup does not contain RBS content.
    3. New SharePoint 2013 environment with SQL 2012
    a. Restore the SQL backup
    i. Open the SQL server.
    ii. Open SQL Server Management Studio.
    iii. Restore the SQL database using the *.bak file.
    b. Dismount the database that is attached to the existing application
    i. Open the SharePoint server.
    ii. Open the SharePoint Management PowerShell.
    iii. Execute the script below:
    Dismount-SPContentDatabase <Database name>
    Note: change the database name to the one bound to the existing application.
    c. Mount the restored database to the existing application
    i. Open the SharePoint server.
    ii. Open the SharePoint Management PowerShell.
    iii. Execute the script below:
    Mount-SPContentDatabase <Database name> -DatabaseServer <Database server name> -WebApplication <URL>
    Note:
    Change the database name to the newly restored database name.
    Change the database server name to the form "DB server name\DB instance name".
    Change the URL of the web application.
    This command takes some time.
    d. Upgrade the site collection
    i. Open the SharePoint server.
    ii. Open the new site.
    iii. You will find a message at the top: "Experience all that SharePoint 15 has to offer. Start now or Remind me later".
    iv. Click on "Start".
    v. Click on "Upgrade this Site Collection".
    vi. Click on "I Am ready".
    vii. After some time you will get the message "Upgrade Completed Successfully".
    viii. Test user logging.

  • Data S Default data transfer Options

    Hi
    What do the below-mentioned terms mean in InfoPackage -> Scheduler -> DataS. Default Data Transfer?
    1. Max size of data packet in kBytes
    2. Max number of dialog processes for sending data
    3. Number of data packets per Info IDoc
    Please help me with an example, and explain what importance it has when we increase/decrease the above three values. Is there any interconnection among the three, or are they all independent?
    Thanks
    Puneet

    Hello Puneet,
    These are some standard BW Settings done in transaction SPRO.
    SPRO ->SAP Customizing Implementation Guide->SAP NetWeaver->SAP Business Information Warehouse->Links to Other Systems->Maintain Control Parameters for the data transfer
    Maximum size of a data packet in kilo bytes.
    The individual records are sent in packages of varying sizes in the data transfer to the Business Information Warehouse. Using these parameters you determine the maximum size of such a package and therefore how much of the main memory may be used for the creation of the data package.
    SAP recommends a data package size between 10 and 50 MB.
    Frequency with which status Idocs are sent
    With this frequency you establish how many data IDocs should be sent in an Info IDoc.
    Maintain Control Parameters for the data transfer
    Standard settings
    For SAP source systems, you change the control parameter settings in the transaction SBIW (Customizing for Extractors), under Business Information Warehouse -> General Settings -> Control Parameters -> Maintain Control Parameters for Data Transfer .
    Activities
    1. Maximum size of data packages
    For data transfer into BW, the individual data records are sent in packages of variable size. You use these parameters to control how large such a data package typically is. If no entry is maintained, the data is transferred with a standard setting of 10,000 kbytes per data package. The memory requirement depends not only on the setting for data package size, but also on the width of the transfer structure, and the memory requirement of the  relevant extractor.
    2. Frequency
    With the specified frequency, you determine after how many data IDocs an Info IDoc is sent, or how many data IDocs are described by an Info IDoc.
    The frequency is set to 1 by default. This means that an Info IDoc follows every data IDoc. Generally, choose a frequency of between 5 and 10, but not greater than 20.
    The larger the package size of a data IDoc, the lower you must set the frequency. In this way you ensure that, when loading data, you receive information on the current data load status at relatively short intervals.
    In the BW Monitor you can use each Info IDoc to see whether the loading process is running without errors. If this is the case for all the data IDocs in an Info IDoc, then the traffic light in the Monitor is green. One of the things the Info IDocs contain information on, is whether the current data IDocs have been loaded correctly.
    3. Size of a PSA partition
    Here, you can set the number of records at which a new partition is generated. This value is set to 1.000.000 records as standard.
    When you are integrating with Other SAP Components then
    SPRO ->SAP Customizing Implementation Guide->Integration with Other SAP Components->Data Transfer to the SAP Business Information Warehouse->General Settings->Maintain Control Parameters for the data transfer
    Maintain Control Parameters for Data Transfer
    Activities
    1. Source System
    Enter the logical system of your source client and assign the control parameters you selected to it.
    You can find further information on the source client in the source system by choosing the path Tools -> Administration -> Management -> Client Maintenance.
    2. Maximum Size of the Data Package
    When you transfer data into BW, the individual data records are sent in packages of variable size. You can use these parameters to control how large a typical data packet like this is.
    If no entry was maintained then the data is transferred with a default setting of 10,000 kBytes per data packet. The memory requirement not only depends on the settings of the data package, but also on the size of the transfer structure and the memory requirement of the relevant extractor.
    3. Maximum Number of Rows in a Data Package
    With large data packages, the memory requirement mainly depends on the number of data records that are transferred with this package. Using this parameter you control the maximum number of data records that the data package should contain.
    By default a maximum of 100,000 records are transferred per data package.
    The maximum main memory requirement per data package is approximately 2 x 'Max. Rows' x 1000 bytes (for example, with the default of 100,000 rows this is roughly 200 MB).
    4. Frequency
    The specified frequency determines after how many data IDocs an Info IDoc is sent, or how many data IDocs an Info IDoc describes.
    Frequency 1 is set by default. This means that an Info IDoc follows every data IDoc. In general, you should select a frequency between 5 and 10, but no higher than 20.
    The bigger the data IDoc packet, the lower the frequency setting should be. In this way, when you upload you can obtain information on the respective data load in relatively short spans of time.
    With the help of every Info IDoc, you can check the BW monitor to see if there are any errors in the loading process. If there are none, then the traffic light in the monitor will be green. The Info IDocs contain information such as whether the respective data IDocs were uploaded correctly.
    5. Maximum number of parallel processes for the data transfer
    An entry in this field is only relevant from release 3.1I onwards.
    Enter a number larger than 0. The maximum number of parallel processes is set by default at 2. The ideal parameter selection depends on the configuration of the application server, which you use for transferring data.
    6. Background job target system
    Enter the name of the application server on which the extraction job is to be processed.
    To determine the name of the application server, choose Tools -> Administration -> Monitor -> System monitoring -> Server. The name of the application server is displayed in the column Computer.
    7. Maximum Number of Data Packages in a Delta Request
    With this parameter, you can restrict the number of data packages in a delta request or in the repetition of a delta request.
    Only use this restriction when you expect delta requests with a very high data volume, so that, despite sufficiently large data package sizes, more than 1000 data packages can result in a request.
    With an initial value or when the value is 0, there is no restriction. Only a value larger than 0 leads to a restriction in the number of data packages. For reasons of consistency, this number is not generally exactly adhered to. The actual restriction can, depending on how much the data is compressed in the qRFC queue , deviate from the given limit by up to 100.
    Thanks
    Chandran

  • LDAP Authentication in a separate data store

    I am running AM 7.0 in Legacy Mode. I want to have a sub-organization (or realm) do its authentication in a different LDAP from the one AM uses as its data store. I have done the same successfully in AM6.1.
    I modified the LDAP Authentication module in the realm to point to the other LDAP. I can now log in to the sub-organization/realm against the secondary LDAP. However, when AM searches for attributes after the login, it uses the search dn that is specified for the LDAP module. What I want it to do is to use the main AM repository for attributes, roles, etc., and only validate credentials against the remote LDAP.
    In 6.1 this was the default behavior. Modifying LDAP Authentication did not affect other behaviors of AM, only the authentication.
    Advice?

    Do you have "Return User DN to Authenticate" enabled on the LDAP module? If so, turn it off and see what happens.

  • Error while viewing data from data store

    Hello Gurus,
    We are facing an issue with the driver when we try to view data from a data store related to the Hyperion Essbase technology.
    ODI version is 11.1.1.6.
    Following is the error that we are getting:
    java.lang.IllegalArgumentException: Driver name cannot be empty
         at org.springframework.util.Assert.hasText(Assert.java:161)
         at com.sunopsis.sql.SnpsConnection.setDriverName(SnpsConnection.java:302)
         at com.sunopsis.dwg.dbobj.DwgConnectConnection.setDefaultConnectDefinition(DwgConnectConnection.java:380)
         at com.sunopsis.dwg.dbobj.DwgConnectConnection.<init>(DwgConnectConnection.java:274)
         at com.sunopsis.dwg.dbobj.DwgConnectConnection.<init>(DwgConnectConnection.java:288)
         at oracle.odi.core.datasource.dwgobject.support.DwgConnectConnectionCreatorImpl.createDwgConnectConnection(DwgConnectConnectionCreatorImpl.java:53)
         at com.sunopsis.graphical.frame.edit.EditFrameTableData.snpsInitializeSnpsComponentsSpecificRules(EditFrameTableData.java:85)
         at com.sunopsis.graphical.frame.SnpsEditFrame.snpsInitialize(SnpsEditFrame.java:1413)
         at com.sunopsis.graphical.frame.edit.AbstractEditFrameGridBorland.initialize(AbstractEditFrameGridBorland.java:623)
         at com.sunopsis.graphical.frame.edit.AbstractEditFrameGridBorland.<init>(AbstractEditFrameGridBorland.java:868)
         at com.sunopsis.graphical.frame.edit.EditFrameTableData.<init>(EditFrameTableData.java:50)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
         at oracle.odi.ui.editor.AbstractOdiEditor$1.run(AbstractOdiEditor.java:176)
         at oracle.ide.dialogs.ProgressBar.run(ProgressBar.java:655)
         at java.lang.Thread.run(Thread.java:662)
    Is there any specific JAR file related to Hyperion Essbase?
    And where do we find the default drivers that come with ODI?
    Please help.
    Thanks,
    Santy.

    You cannot view the data from an Essbase data store, as it isn't configured with a JDBC driver that supports this function.

  • Olap type in data store properties

    Hello,
    Can you please tell me what is the purpose of defining the OLAP type (dimension, fact table, SCD) in the data store properties in ODI? I'm already familiar with dimensions & fact tables as used in data warehousing modeling. I am just wondering how specifying the OLAP type of a data store affects its behavior.
    Thank you,
    Anju

    If you set one of these properties on each column, then you can use a KM like this one: IKM Oracle Slowly Changing Dimension, and it will implement the SCD logic by itself.
    For example, if you have this source table and a target table with the same structure plus two columns for the dates (start and end):
    Source table PRODUCT :
    ID .... BID ... NAME ..... PRICE
    1 ..... 10 ..... ODI ....... 10
    You set your properties on your target datastore: ID as Surrogate Key, BID as Natural Key, NAME as Overwrite on Change, PRICE as Add Row on Change, START as Starting Timestamp and finally END as Ending Timestamp.
    You execute the interface a first time. Your target datastore will contain something like this:
    ID .... BID ... NAME ..... PRICE .... START ........... END
    1 ..... 10 .... ODI ........ 10 ......... 30/10/2012 ... 31/12/3000
    The next day, change the name of the product to OWB in your source and execute the interface again. Your target will look like this :
    ID .... BID ... NAME ..... PRICE .... START ........... END
    1 ..... 10 .... OWB ........ 10 ......... 30/10/2012 ... 31/12/3000
    The next day, change the price of the product to set it to 20. Your target will look like this :
    ID .... BID ... NAME ..... PRICE .... START ................. END
    1 ..... 10 .... OWB ........ 10 ......... 30/10/2012 ........ *31/10/2012*
    *1 ..... 10 .... OWB ........ 20 ......... 01/11/2012 ...... 31/12/3000*
    This should help you, but I'll try to look for some documentation. If I don't find it I'll write an article myself next week ;).
    Hope it helps.
    Regards,
    Jerome
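    To make the structure concrete, here is a minimal sketch of what the target datastore in this example could look like (the table and column names are hypothetical, and the real DDL depends on your model and the KM options):
    -- Hypothetical SCD target for the PRODUCT example above
    CREATE TABLE target_product (
      id          NUMBER,        -- SCD behavior: Surrogate Key
      bid         NUMBER,        -- Natural Key
      name        VARCHAR2(50),  -- Overwrite on Change
      price       NUMBER,        -- Add Row on Change
      start_date  DATE,          -- Starting Timestamp
      end_date    DATE           -- Ending Timestamp
    );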

  • 838 Cannot get data store shared memory segment error in Timesten

    Hi Chris,
    This is Shalini. I have mailed you for the last two days regarding this issue. You asked me for a few details; here are the answers:
    1. Have you modified anything after the TimesTen installation? - No, I didn't change anything.
    2. What are the three values under physical memory? - Total: 2036 MB, Cached: 1680 MB, Free: 12 MB
    3. ttmesg.log and tterrors.log? -
    tterrors.log::
    12:48:58.83 Warn: : 1016: 3972 Connecting process reports creation failure
    ttmesg.log::
    12:48:58.77 Info: : 1016: maind got #12.14, hello: pid=3972 type=library payload=%00%00%00%00 protocolID=TimesTen 11.2.1.3.0.tt1121_32 ident=%00%00%00%00
    12:48:58.77 Info: : 1016: maind: done with request #12.14
    12:48:58.77 Info: : 1016: maind got #12.15 from 3972, connect: name=c:\timesten\tt1121_32\data\my_ttdb\my_ttdb context= 266e900 user=InstallAdmin pass= dbdev= logdev= logdir=c:\timesten\tt1121_32\logs\my_ttdb grpname= access=%00%00%00%00 persist=%00%00%00%00 flags=@%00%00%01 newpermsz=%00%00%00%02 newtempsz=%00%00%00%02 newpermthresh=Z%00%00%00 newtempthresh=Z%00%00%00 newlogbufsz=%00%00%00%02 newsgasize=%00%00%00%02 newsgaaddr=%00%00%00%00 autorefreshType=%ff%ff%ff%ff logbufparallelism=%00%00%00%00 logflushmethod=%00%00%00%00 logmarkerinterval=%00%00%00%00 connections=%03%00%00%00 control1=%00%00%00%00 control2=%00%00%00%00 control3=%00%00%00%00 ckptrate=%06%00%00%00 connflags=%00%00%00%00 newlogfilesz=%00%00@%01 skiprestorecheck=%00%00%00%00 realuser=InstallAdmin conn_name=my_ttdb ckptfrequency=X%02%00%00 ckptlogvolume=%14%00%00%00 recoverythreads=%03%00%00%00 reqid=* plsql=%ff%ff%ff%ff receiverThreads=%00%00%00%00
    12:48:58.77 Info: : 1016: 3972 0266E900: Connect c:\timesten\tt1121_32\data\my_ttdb\my_ttdb a=0x0 f=0x1000040
    12:48:58.77 Info: : 1016: permsize=33554432 tempsize=33554432
    12:48:58.77 Info: : 1016: logbuffersize=33554432 logfilesize=20971520
    12:48:58.77 Info: : 1016: permwarnthresh=90 tempwarnthresh=90 logflushmethod=0 connections=3
    12:48:58.77 Info: : 1016: ckptfrequency=600 ckptlogvolume=20 conn_name=my_ttdb
    12:48:58.77 Info: : 1016: recoverythreads=3 logbufparallelism=0
    12:48:58.77 Info: : 1016: plsql=0 sgasize=33554432 sgaaddr=0x00000000
    12:48:58.77 Info: : 1016: control1=0 control2=0 control3=0
    12:48:58.79 Info: : 1016: ckptrate=6 receiverThreads=0
    12:48:58.79 Info: : 1016: 3972 0266E900: No such data store
    12:48:58.79 Info: : 1016: daDbConnect failed
    12:48:58.79 Info: : 1016: return 1 833 'no such data store!' arg1='c:\timesten\tt1121_32\data\my_ttdb\my_ttdb' arg1type='S' arg2='' arg2type='S'
    12:48:58.79 Info: : 1016: maind: done with request #12.15
    12:48:58.79 Info: : 1016: maind got #12.16 from 3972, create: name=c:\timesten\tt1121_32\data\my_ttdb\my_ttdb context= 266e900 user=InstallAdmin pass= dbdev= logdev= logdir=c:\timesten\tt1121_32\logs\my_ttdb grpname= persist=%00%00%00%00 access=%00%00%00%00 flags=@%00%00%01 permsize=%00%00%00%02 tempsize=%00%00%00%02 permthresh=Z%00%00%00 tempthresh=Z%00%00%00 logbufsize=%00 %00%02 logfilesize=%00%00@%01 shmsize=8%f4%c9%06 sgasize=%00%00%00%02 sgaaddr=%00%00%00%00 autorefreshType=%01%00%00%00 logbufparallelism=%00%00%00%00 logflushmethod=%00%00%00%00 logmarkerinterval=%00%00%00%00 connections=%03%00%00%00 control1=%00%00%00%00 control2=%00%00%00%00 control3=%00%00%00%00 ckptrate=%06%00%00%00 connflags=%00%00%00%00 inrestore=%00%00%00%00 realuser=InstallAdmin conn_name=my_ttdb ckptfrequency=X%02%00%00 ckptlogvolume=%14%00%00%00 recoverythreads=%03%00%00%00 reqid=* plsql=%00%00%00%00 receiverThreads=%00%00%00%00
    12:48:58.79 Info: : 1016: 3972 0266E900: Create c:\timesten\tt1121_32\data\my_ttdb\my_ttdb p=0x0 a=0x0 f=0x1000040
    12:48:58.79 Info: : 1016: permsize=33554432 tempsize=33554432
    12:48:58.79 Info: : 1016: logbuffersize=33562624 logfilesize=20971520
    12:48:58.80 Info: : 1016: shmsize=113898552
    12:48:58.80 Info: : 1016: plsql=0 sgasize=33554432 sgaaddr=0x00000000
    12:48:58.80 Info: : 1016: permwarnthresh=90 tempwarnthresh=90 logflushmethod=0 connections=3
    12:48:58.80 Info: : 1016: ckptfrequency=600 ckptlogvolume=20 conn_name=my_ttdb
    12:48:58.80 Info: : 1016: recoverythreads=3 logbufparallelism=0
    12:48:58.80 Info: : 1016: control1=0 control2=0 control3=0
    12:48:58.80 Info: : 1016: ckptrate=6, receiverThreads=1
    12:48:58.80 Info: : 1016: creating DBI structure, marking in flux for create by 3972
    12:48:58.82 Info: : 1016: daDbCreate: about to call createShmAndSem, trashed=-1, panicked=-1, shmSeq=1, name c:\timesten\tt1121_32\data\my_ttdb\my_ttdb
    12:48:58.82 Info: : 1016: marking in flux for create by 3972
    12:48:58.82 Info: : 1016: create.c:338: Mark in-flux (now reason 1=create pid 3972 nwaiters 0 ds c:\timesten\tt1121_32\data\my_ttdb\my_ttdb) (was reason 1)
    12:48:58.82 Info: : 1016: maind: done with request #12.16
    12:48:58.83 Info: : 1016: maind got #12.17 from 3972, create complete: name=c:\timesten\tt1121_32\data\my_ttdb\my_ttdb context= 266e900 connid=%00%08%00%00 success=N
    12:48:58.83 Info: : 1016: 3972 0266E900: CreateComplete N c:\timesten\tt1121_32\data\my_ttdb\my_ttdb
    12:48:58.83 Warn: : 1016: 3972 Connecting process reports creation failure
    12:48:58.83 Info: : 1016: About to destroy SHM 560
    12:48:58.83 Info: : 1016: maind: done with request #12.17
    12:48:58.83 Info: : 1016: maind 12: socket closed, calling recovery (last cmd was 17)
    12:48:58.85 Info: : 1016: Starting daRecovery for 3972
    12:48:58.85 Info: : 1016: 3972 ------------------: process exited
    12:48:58.85 Info: : 1016: Finished daRecovery for pid 3972.
    I think "Connecting process reports creation failure" is the error shown.
    4. DSN Attributes? -
    Earlier I used my_ttdb, which is the SYSTEM DSN, and I also tried creating a user DSN; it is still not working. I will give the my_ttdb DSN attributes.
    I am unable to attach the screenshots in this message. Is there any way to attach them? I am not allowed to send the reply through mail from my company, so I only sent this message. You can reply to my mailbox id.
    Chris, or anybody, can you please reply to the message below?
    I have a table called Employee, and I have created a view called emp_view.
    create view emp_view as
    select /*+ INDEX (Employee emp_no) */
    emp_no,emp_name
    from Employee;
    We have another index on the Employee table called emp_name.
    I need to use the emp_name index through emp_view, like below:
    select /*+ INDEX (<from employee tables> emp_name) */
    emp_no
    from emp_view
    where emp_name='SSS';
    In the hint I tried /*+ INDEX (emp_view.Employee emp_name) */, but it is still using the index specified in the emp_view view, i.e. emp_no. Can anyone please help me resolve this issue?
    Edited by: user12154813 on Nov 3, 2009 4:21 AM
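    For reference, a minimal sketch of the global-hint pattern being attempted (it assumes the Employee table, emp_view view and emp_name index from the post; whether the optimizer honors it still depends on the query and statistics, and the INDEX hint embedded inside the view may take precedence):
    -- Global hint: qualify the table inside the view with the alias
    -- given to the view in the outer query
    SELECT /*+ INDEX(ev.Employee emp_name) */
           emp_no
    FROM   emp_view ev
    WHERE  emp_name = 'SSS';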

    DSN attributes for the two paths you gave are mentioned below.
    HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\mt_ttdb:
    (Default) - (value not set)
    AssertDlg - 1
    AutoCreate -1
    AutorefreshType -1
    CacheGridEnable -0
    CacheGridMsgWait -60
    CatchupOverride -0
    CkptFrequency -600
    CkptLogVolume -20
    CkptRate -6
    ConnectionCharacterSet -US7ASCII
    Connections -3
    Control1 -0
    Control2 -0
    Control3 -0
    DatabaseCharacterSet -US7ASCII
    DataStore - C:\TimesTen\tt1121_32\Data\my_ttdb\my_ttdb
    DDLCommitBehavior -0
    Description - My Timesten Data store
    Diagnostics -1
    Driver - C:\TimesTen\tt1121_32\bin\ttdv1121.dll
    DuplicateBindMode -0
    DurableCommits -0
    DynamicLoadEnable -1
    DynamicLoadErrorMode -0
    ExclAccess -0
    ForceConnect -0
    InRestore -0
    Internal1 -0
    Isolation -1
    LockLevel -0
    LockWait -10.0
    LogAutoTruncate -1
    LogBuffSize -0
    LogBufMB -32
    LogBufParallelism -0
    LogDir -C:\TimesTen\tt1121_32\Logs\my_ttdb
    LogFileSize -20
    LogFlushMethod -0
    Logging -1
    LogMarkerInterval -0
    LogPurge -1
    MatchLogOpts -0
    MaxConnsPerServer -4
    NLS_LENGTH_SEMANTICS -BYTE
    NLS_NCHAR_CONV_EXCP -0
    NLS_SORT -BINARY
    NoConnect -0
    Overwrite -0
    PassThrough -0
    PermSize -32
    PermWarnThreshold -0
    PLSQL - value (-1)
    PLSQL_CODE_TYPE -INTERPRETED
    PLSQL_CONN_MEM_LIMIT - 100
    PLSQL_MEMORY_SIZE - 32
    PLSQL_OPTIMIZE_LEVEL - 2
    PLSQL_TIMEOUT - 30
    Preallocate -0
    PrivateCommands -0
    QueryThreshold - value (-1)
    RACCallback -1
    ReadOnly -0
    ReceiverThreads -0
    RecoveryThreads -3
    SkipRestoreCheck -0
    SMPOptLevel - value(-1)
    SQLQueryTimeout - value(-1)
    Temporary -0
    TempSize -32
    TempWarnThreshold - 0
    ThreadSafe -1
    TransparentLoad- value(-1)
    TypeMode -0
    WaitForConnect -1
    XAConnection -0
    HKEY_CURRENT_USER\SOFTWARE\ODBC\ODBC.INI\my_test:
    existing values changed from the above dsn attributes:
    TypeMode - value(-1)
    RecoveryThreads -0
    MaxConnsPerServer -5
    LogFileSize -4
    LogDir -C:\TimesTen\tt1121_32\Logs\my_test
    DataStore - C:\TimesTen\tt1121_32\data\my_test\my_test
    DatabaseCharacterSet -AL32UTF8
    ConnectionCharacterSet -AL32UTF8
    CkptFrequency - value(-1)
    CkptLogVolume - value(-1)
    CkptRate -value(-1)
    Connections -0
    CacheGridEnable -1
    newly added values:
    ServerStackSize - 10
    ServersPerDSN -2
    other attributes are same for both the dsn's.
    Please reply as soon as possible.
    Thanks,
    Shalini.
    Edited by: user12154813 on Nov 3, 2009 11:34 PM
    Edited by: user12154813 on Nov 3, 2009 11:37 PM

  • How to create a data store with PermSize > 512 MB on WIN32?

    Hi!
    How do I create a data store with PermSize > 512 MB on WIN32? If I set PermSize > 512 MB on WIN32, the data store becomes invalid.

    Thanks for the details. As I mentioned, due to issues with the way Windows manages memory and address space it is generally not possible to create a datastore larger than around 700 Mb on WIN32. Sometimes you may be lucky and get close to 1 GB but usually not. The issue is as follows; on Windows, a TimesTen datastore is a shared mapping created from memory backed by the paging file. This shared mapping must be mapped into the process address space as a contiguous range of addresses. So, if you have a 1 GB datastore then your process needs to have a contiguous 1 GB range of addresses free in order to be able to connect to (map) the datastore. Unfortunately the default behaviour of Windows is to map DLLs into a process address space all over the place and any process that uses any significant number of DLLs is very unlikely to have a contiguous free address range larger than 500-700 Mb.
    This problem does not exist with other O/S such as Unix or Linux, nor does it exist with 64-bit Windows. So, if you need to use a cache or datastore larger than around 700 Mb, you need to use either 64-bit Windows or another O/S. Note that even on 32-bit Linux/Unix, TimesTen datastores are limited to a maximum size of 2 GB. If you need more than 2 GB you need to use a 64-bit O/S.
    Chris
