Database loading into swap?

I recently took over a database server running Solaris 9 with 17 9.2.0.6 databases running 24/7. It has 8 single-core processors running at 1.2GHz and had 16GB of RAM; we upgraded to 32GB within the last few months. With the extra RAM available, I did some research to find the instances that could benefit most from it. As I started allocating the RAM to those instances, I noticed that my free swap was dropping by almost exactly the amount I was adding to each SGA. The only reason I noticed is that the box ran out of swap space and we started getting errors. Is it normal for an SGA increase to consume an equal amount of swap alongside the physical RAM allocated?
Also, running a simple vmstat on this box I see that my page-ins are constantly between 50,000 and 100,000. I'm guessing this is because everything seems to be loaded into swap instead of physical RAM. The odd thing is that performance isn't really a problem on this server, so it's baffling to me. Any help would be greatly appreciated.
Thanks.
Luke

On Solaris you should be concerned with the scan rate (the sr column in vmstat, or use sar -g).
If the scan rate is constantly over 200 pages/s, that could indicate a problem.
Paging and swapping are different: paging is not necessarily a bad thing on Solaris, while swapping should be avoided.
Paging only pushes out pages that have not been used recently, based on OS parameters (lotsfree, maxpgio, throttlefree, etc.), while swapping pushes out all memory associated with a process.
As for "I'm guessing this is because it seems like everything is being loaded in the swap instead of the physical RAM": rest assured that everything can't be loaded into swap instead of RAM. That only happens when there is no free space in RAM (free memory consistently below desfree).
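For example, a quick way to watch the scan rate for a few minutes (standard Solaris tools; the intervals below are arbitrary):

    # sar -g reports the page scan rate (pgscan/s) every 10 seconds, 30 times
    sar -g 10 30
    # or watch the sr column directly
    vmstat 10

A sustained sr in the hundreds means the page scanner is hunting for free memory; brief spikes are normal.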

Similar Messages

  • CDC Error (JOURNALIZED DATA is not loading into target database)

    Hi,
    I have enabled the source database for CDC and got the green mark on the source database model tables.
    While inserting data into the source, the J$ tables are updated with JRN_SUBSCRIBER and the other fields.
    But when I run the package/interface, the journalized data is not loading into the target database.
    I have implemented CDC for 7 source tables, using JKM MSSQL Simple, with journalized data enabled at the interface level.
    Source database: MSSQL Server
    Target database: Oracle 11g.
    Please advise.
    Thanks in advance.
    Zakeer Hussain

    Zakeer, look into this link: http://odiexperts.com/?p=1096 . Hope this helps.
    Also, before running, can you right-click the Source Datastore, click Journal Data, and check whether you can see the data? If the data still isn't passing through, set the temporary-objects option to Yes in the LKM and IKM, debug, and see at which step the data stops flowing, and look for any filter or condition that might be stopping it.
    If you still can't figure it out, tell us at which step the data is not flowing through and we will try to guide you.

  • How to pre-load all database rows into cache

    Hi All,
    The below is my cache configuration, I would like to know how to load all the database rows/specified number of rows into the cache.
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>TableEmp</cache-name>
          <scheme-name>distributed-hibernate</scheme-name>
          <init-params>
            <init-param>
              <param-name>entityname</param-name>
              <param-value>com.tangosol.examples.explore.Emp</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>distributed-hibernate</scheme-name>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <internal-cache-scheme>
                <local-scheme></local-scheme>
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>
                    com.tangosol.coherence.hibernate.HibernateCacheStore
                  </class-name>
                  <init-params>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>{entityname}</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </cachestore-scheme>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
        </distributed-scheme>
      </caching-schemes>
    </cache-config>
    Please kindly provide a solution.
    Regards
    S

    Hi Rich,
    "Imagine I have just downloaded coherence, I have run a server with the default config. From what you said to S, coherence can pull the data from the database itself WITHOUT me having to push it to coherence? If so can you please explain how this is done, or point me at a guide?"
    You might start with [Read-Through Caching|http://coherence.oracle.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+and+Refresh-Ahead+Caching#Read-Through%2CWrite-Through%2CWrite-BehindandRefresh-AheadCaching-ReadThroughCache] to understand how Coherence can pull data. It is the implementation of a CacheLoader that enables the Coherence cache to pull the data.
    The cache configuration that S provided specifies a read-write-backing-map-scheme, indicating that the HibernateCacheStore class should be used by Coherence; it is similar to the configuration discussed at [Using Hibernate as a CacheStore for Coherence|http://wiki.tangosol.com/display/COH34UG/Using+Hibernate+as+a+CacheStore+for+Coherence]. In responding to the original question, I assumed that the data source from which the cache is to be loaded is the same as the data source fronted by the Hibernate configuration.
    "Secondly, with respect to the answer to my question: if I don't care about versioning ... do I need an EvolvablePortableObject?"
    If you really don't want to version your serialized representations, you can implement the PortableObject interface instead, but the additional cost of implementing EvolvablePortableObject is small and the potential benefit is great.
    "So my question is, can coherence pull the data from the database using a preload request and serialize into a POF format without me having to push the data to coherence via a separate app? And if so, could you please explain how? Or direct me to some documentation?"
    You do not need to push data to Coherence via a separate app. Coherence can pull the data from the database. Coherence can also preload the cache using an EntryProcessor. You can configure Coherence to use POF and will need to implement POF serialization methods for your cache objects.
    The [Partitioned cache with a serializer|http://coherence.oracle.com/display/COH34UG/Sample+Cache+Configurations#SampleCacheConfigurations-Partitionedcacheofadatabase] example and the links it provides should provide sufficient documentation for configuring and using POF.
    Whether you decide to use the HibernateCacheStore, the TopLinkCacheStore or implement your own CacheStore or CacheLoader class to access your data in your database is your decision. You should be able to find sufficient documentation and examples to help you decide how you would like to use Coherence at the [Coherence Knowledge Base|http://wiki.tangosol.com/display/COH/Oracle+Coherence+Knowledge+Base+Home]. I would recommend starting with the [User Guide|http://wiki.tangosol.com/display/COH34UG/Coherence+3.4+Home] if you would like to get a better grasp of the overall architecture.
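    As a minimal illustration of the pull model, you can exercise the read-through path from the Coherence command-line console without writing any application code. This is a sketch: the jar names, config file name, and key value below are assumptions, and your entity classes must also be on the classpath.

        # start the console against the same cache configuration
        java -cp coherence.jar:hibernate.jar \
          -Dtangosol.coherence.cacheconfig=cache-config.xml \
          com.tangosol.net.CacheFactory
        Map (?): cache TableEmp
        Map (TableEmp): get 1

    A cache miss on the get makes the HibernateCacheStore load the entry from the database; preloading is the same idea applied to a whole key set at once.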
    Regards,
    Harv

  • How to list the JAR files loaded into the Oracle Database?

    How to list the JAR files loaded into the Oracle Database?

    From 11.1 onwards, the two views below are available to identify the jar files loaded into the database:
    JAVAJAR$
    JAVAJAROBJECTS$
    By querying JAVAJAR$ you can find information about the JAR files loaded into the database, and using JAVAJAROBJECTS$ you can find all the Java objects associated with a given JAR file.
    These views are populated every time you use LOADJAVA with the "-jarsasdbobjects" option to load your custom Java classes.
    Unfortunately this feature is only available from 11.1 onwards, and there is no clear workaround in 10.2 or earlier.
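    For example, a quick look from SQL*Plus as a privileged user (a sketch; these are SYS-owned dictionary objects, and SELECT * avoids guessing at release-specific column names):

        sqlplus -s / as sysdba <<'EOF'
        -- jar files loaded with loadjava -jarsasdbobjects (11.1 onwards)
        SELECT * FROM sys.javajar$;
        -- the java objects belonging to each loaded jar
        SELECT * FROM sys.javajarobjects$;
        EOF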

  • New "Loading Database Data into Endeca Information Discovery Applications" product enablement video available on YouTube!

    Hi All,
    The Endeca Information Discovery Information Curriculum Development team just added a new video, "Loading Database Data into Endeca Information Discovery Applications", to the Video  Feature Overviews section of the Oracle Endeca Information Discovery YouTube channel.
    The URL for the video is https://www.youtube.com/watch?v=ix6bGULuY1I&list=PLQl6jp6EE5H2K4JDic6bIPA7yZgbi1KAQ .
    Here’s a description of the video tutorial:
    The goal of this video tutorial is to enable you to configure Studio to use data from a JDBC data source and to create your own application in Studio.
    The target audience is Studio administrators who have to configure a JDBC data source in Studio’s Data Source Library.  As a secondary audience, business users will learn how to load data from a JDBC data source and create a discovery application in Studio.
    As always, if you have any ideas for additional videos, please let us know.
    Thanks!


  • Load into database from a file

    hi,
    I need to know whether there is any means other than SQL*Loader to import data into an Oracle 8i database from a file, i.e. if I need to load into a server from a client machine that does not have the Oracle 8i client installed, how could I do it? Kindly help out. Thanks in advance.

    UTL_FILE
    With the UTL_FILE package, your PL/SQL programs can read and write operating system text files. UTL_FILE provides a restricted version of operating system stream file I/O.
    UTL_FILE I/O capabilities are similar to standard operating system stream file I/O (OPEN, GET, PUT, CLOSE) capabilities, but with some limitations. For example, you call the FOPEN function to return a file handle, which you use in subsequent calls to GET_LINE or PUT to perform stream I/O to a file. When file I/O is done, you call FCLOSE to complete any output and free resources associated with the file.
    http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96612/u_file.htm#ARPLS069
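    As a minimal sketch of the approach (the login, directory path, file name, and staging table are placeholders; on 8i the directory must be listed in the utl_file_dir init parameter, and the file must first be transferred to the database server, e.g. by ftp):

        sqlplus -s scott/tiger <<'EOF'
        DECLARE
          f    UTL_FILE.FILE_TYPE;
          line VARCHAR2(1022);
        BEGIN
          f := UTL_FILE.FOPEN('/u01/loads', 'emp.dat', 'R');
          LOOP
            BEGIN
              UTL_FILE.GET_LINE(f, line);   -- raises NO_DATA_FOUND at end of file
            EXCEPTION
              WHEN NO_DATA_FOUND THEN EXIT;
            END;
            INSERT INTO emp_stage (raw_line) VALUES (line);   -- parse as needed
          END LOOP;
          UTL_FILE.FCLOSE(f);
          COMMIT;
        END;
        /
        EOF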
    Joel Pérez

  • Essbase Studio Performance Issue: Data load into BSO cube

    Hello,
    Having successfully built my outline by member loading through Essbase Studio, I tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
    In the Studio properties file I set oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. And even if it had worked, I am not sure streaming mode would be a much faster alternative to non-streaming mode. What I'd like to know is which settings I can change (either in Essbase or Studio server) to speed up my data load. I am loading into a Block Storage database with 3 dense, 8 sparse, and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, with only 153 blocks created at a block size of 24B. Assuming that in my real application the number of blocks created will be at least 1000 times this, I need to change some settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size, or multiple threads help performance? Or what would you suggest I do?
    Thank you very much.

    Hello user13695196,
    (sorry, I no longer remember my system number here)
    Before any optimisation attempts in the Essbase (also Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. Create a view in your source DB schema from your SQL statement (the one behind your data load rule).
    2. Query against this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete; see the sketch below. Also note the number of returned rows, for your information and for future comparison of results.
    If your query runs longer than you think is acceptable, then
    a) check the DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, kindly ask your DBA for help. Usually they can help you very fast.
    (Don't be shy - a DBA is a human being like you and me :-) )
    Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
    One additional hint:
    We have often had problems when using views for data loads (not only performance, but other strange behavior too). That's the reason I prefer to build directly on (persistent) tables.
    Just keep in mind: if nothing helps, create a table from your view and then query your data from that table for your Essbase data load. Normally, however, this should be your last option.
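    A minimal sketch of the timing test in SQL*Plus (the login and view name are placeholders for your own):

        sqlplus -s scott/tiger <<'EOF'
        set timing on
        -- COUNT(*) approximates the work; for a truer fetch test,
        -- spool all rows to a file and discard it afterwards
        SELECT COUNT(*) FROM v_essbase_load;
        EOF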
    Best Regards
    (also to you Torben :-) )
    Andre
    Edited by: andreml on Mar 17, 2012 4:31 AM

  • Call EJB From Oracle Stored proc or Database loaded Java?

    Are there any examples of calling an EJB, residing in OC4J on a machine separate from the DB server,
    from an Oracle PL/SQL stored procedure?
    The stored procedure can call Java loaded into the DB; that Java makes the initial bean context. Or another way, if suggested.
    The reason is that I need to use some drivers that I have so far been unable to load directly into the DB using the
    loadjava utility. I plan on using the drivers in the EJB, located on a different machine. But I'd like to know if it's possible to
    make the InitialContext call to the EJB container from PL/SQL. Are the OC4J drivers loadable, so that they can be called from a database-loaded Java class? (I might be a little off on my terminology.)
    Bob
    [email protected]

    Hi Bob,
    Your question has been previously asked on this forum many times already.
    You can probably find the relevant postings by doing a search of the
    forum archives.
    To summarize those posts, as I understand it, the latest version of OC4J
    (version 9.0.3) contains a "oc4jclient.jar" file (I think that's the name
    of the file), that can be loaded into the Oracle database (using the
    "loadjava" utility), and which allows a java stored procedure to invoke
    methods on an EJB residing in OC4J.
    Please note that I have not tried any of the above -- I am only summarizing
    for you what has already been posted. And like I said before, a search
    of the forum archives will probably give you more details.
    Good Luck,
    Avi.

  • Database Instance Using Swap Space even When There is Plenty of Free RAM

    Hi,
    I am facing a strange issue with Oracle on Solaris 5.10.
    Platform: Solaris 5.10
    Database version: 11.2.0.1.0
    Our database instance is using swap space even when there is plenty of free RAM.
    We have 80GB of physical RAM; SGA_MAX_SIZE is set to 32GB.
    memory parameters:
    memory_max_target big integer 32G
    memory_target big integer 32G
    sga_max_size big integer 32G
    sga_target big integer 32G
    but still Oracle is taking memory from swap even though we have enough physical memory.
    Please find the top output:
    load averages: 0.09, 0.18, 0.84 01:07:49
    134 processes: 133 sleeping, 1 on cpu
    CPU states: 99.4% idle, 0.4% user, 0.3% kernel, 0.0% iowait, 0.0% swap
    Memory: 80G real, 69G free, 35G swap in use, 49G swap free
    PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
    1052 oracle 13 59 0 95M 66M sleep 13:16 0.07% oraagent.bin
    10256 oracle 1 59 0 32G 835M sleep 0:00 0.03% oracle
    1227 oracle 15 59 0 78M 56M sleep 4:27 0.02% ocssd.bin
    9524 vector1 112 59 0 1704M 776M sleep 45:04 0.02% java
    988 oracle 38 59 0 119M 92M sleep 3:00 0.02% ohasd.bin
    1297 oracle 1 101 -20 453M 390M sleep 2:47 0.02% oracle
    1765 oracle 1 101 -20 32G 329M sleep 2:45 0.02% oracle
    10258 oracle 1 59 0 32G 345M sleep 0:00 0.02% oracle
    10259 oracle 1 59 0 3400K 2024K cpu/20 0:00 0.01% top
    1777 oracle 1 59 0 32G 342M sleep 1:14 0.01% oracle
    1246 oracle 6 59 0 56M 44M sleep 1:22 0.01% diskmon.bin
    1803 oracle 1 59 0 32G 514M sleep 0:30 0.01% oracle
    1215 oracle 15 59 0 71M 46M sleep 1:12 0.00% cssdagent
    1307 oracle 1 59 0 453M 390M sleep 0:45 0.00% oracle
    1217 oracle 11 59 0 81M 49M sleep 0:35 0.00% orarootagent.bi
    Vmstat output:
    $vmstat 5 5
    kthr memory page disk faults cpu
    r b w swap free re mf pi po fr de sr s3 s3 s3 s3 in sy cs us sy id
    0 0 0 53698448 73900568 20 81 32 6 5 0 1 -0 0 1 -0 2235 1447 1984 1 0 99
    0 0 0 51196160 72037552 1 13 0 0 0 0 0 0 0 0 0 2505 1403 2204 0 0 100
    0 0 0 51193488 72035864 55 380 0 0 0 0 0 0 0 0 0 2487 2143 2203 0 1 99
    0 0 0 51183856 72030176 0 0 0 3 3 0 0 0 0 0 0 2496 1370 2182 0 0 99
    0 0 0 51182648 72029112 22 117 0 0 0 0 0 0 0 0 0 2503 1408 2193 1 0 99
    $cat /etc/release
    Solaris 10 10/09 s10s_u8wos_08a SPARC
    Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 16 September 2009
    I referred to the Metalink document "Database Instance Using Swap Space When There is Plenty of Free RAM" [ID 761960.1]
    and tried disabling sga_max_size, memory_max_target, and memory_target in various combinations, but with no success.
    We have one more box with the same setup on the Solaris platform; there the above memory parameters are not disabled, yet it does not use swap space and takes memory only from physical RAM.
    Can anyone throw some light on this issue? It would be great to find out where the exact problem lies.
    Thanks in advance
    With Regards,
    Boobathi P

    Hi,
    I didn't say I am not using the 11gR2 memory parameters; I am using all the automatic memory parameters. The thing is, as per the Metalink note, I tried disabling the automatic memory parameters, but that didn't help in preventing the unnecessary swap usage, so I reverted; it is now using the automatic memory parameters again.
    So what is the solution for this?
    With Regards,
    Boo
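    One common cause on Solaris, though not confirmed in this thread: when memory_target (or a resizable SGA) is in use, Oracle attaches the SGA as DISM rather than ISM shared memory, and Solaris reserves swap for the full size of a DISM segment even when plenty of RAM is free, whereas ISM segments are locked in RAM and reserve no swap. A quick way to check which one an instance got, as a sketch (the SID is a placeholder):

        # find the instance's PMON process, then inspect its shared memory mappings
        pid=$(pgrep -f ora_pmon_ORCL)
        pmap -xs "$pid" | grep -i shmid   # mappings are labelled "ism" or "dism"
        swap -s                           # compare reserved vs. available swap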

  • Why NetWeaver04 DB instance takes more time during 'Database Load' phase?

    Hi,
    I'm installing NetWeaver04 CI and DB instances on a Solaris 10 machine as a standalone installation. The CI installed fine, but the DB instance install is taking forever; I don't know how long it will take, because I started it at 17:20 and it is now 22:01. Here is the summary of my install.
    1. Installed CI
    Note: Unicode ABAP install
    2. Ran DB instance install:
    Note: Selected Advanced database options instead of RAW device
    3. Completed Oracle install and continued for DB instance install
    Note: Selected Unicode option and selected Enterprise install
    4. The install has so far completed 25 out of 39 tasks and has been waiting at the Database Load phase for more than 4 hours.
    I'm not sure what the result of my install will be. I would like to know whether a DB instance install is supposed to take this long. Could someone tell me what the reason might be and what I may have missed in the install steps?
    Thanks,
    Ram

    The load is by far the most time-consuming operation; you can make it run faster, if you have plenty of hardware, by increasing the number of parallel R3trans jobs (the default is 3) during the installation. It's not uncommon for this step to run for several hours if you don't increase this parameter.
    Also, watch the Oracle alert log: if your archive directory fills up, the load will just hang indefinitely. If you create your oraarch filesystem with at least 4 GB of space, this shouldn't be an issue during the initial load, but if it's much smaller, you could run into a problem there.
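    A simple way to keep an eye on the archive destination during the load (the path follows a typical SAP layout, with C11 as a placeholder SID):

        # re-check free space in the archive destination every minute;
        # if it fills up, the load hangs until space is freed
        while sleep 60; do df -k /oracle/C11/oraarch; done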
    Rich

  • Automate Metadata loads into EPMA

    Hi,
    My company has 2 systems, HCM and EPMA, and we are trying to keep the data in sync between them. I am new to EPMA and was wondering if there is a way to automate the metadata loads into EPMA? We recently installed ODI, but I am not sure if this can be done via ODI. Any help is appreciated.


  • Automate DRM Metadata load into HFM

    Is there a way to automate a DRM metadata load into HFM? In both instances, we are using the latest Fusion versions and I wanted to know if there is a way to automate loading the flat file produced by DRM into HFM. How would this be done? Does EPMA versus classic HFM setup play a role? If there is no automation, I imagine this would require an admin to manually export from DRM and manually import into HFM.
    Thanks in advance

    Any thoughts on this guys? I'd like to open the question up a bit.
    How is DRM metadata used to source Hyperion applications? I see there is an XML output of metadata. There is some talk about generating an .ads file. What are the other ways? SQL database views?
    I'm using a classic Planning application. Appreciate any thoughts.
    -Chris

  • As a novice with my Mac Pro and Canon 5D Mark III, my 18,000 photographs were not catalogued correctly when downloading into Photoshop Lightroom 4

    As a novice with my Mac Pro computer and Canon 5D Mark III camera, my issue arose over the last year and a half: when downloading into my Photoshop Lightroom 4, my 18,000 photographs were not catalogued correctly, or hardly at all. Over several months my photos have apparently been redirected in my libraries and have stopped appearing in their allotted frames next to their number in the library they are in. They also have a small question mark (?) next to the photo space. The frames are blank but indicate that there is a photo there. The majority of photos are blank, but about a tenth are visible. I apparently need to retrace where they went but do not know how to do that. Can someone help, please?

    It would be difficult to tell you where to begin. If you moved photos around to different folders and didn't use Lightroom to move them, that would cause Lightroom to lose track of where they are. You have to remember that your images are not "in" Lightroom. Lightroom simply points to them wherever they are on your computer. All of the location information, as well as the changes that you make to your images, is stored in the catalog, which is a database. This catalog is central to the operation of Lightroom, and it's essential that you understand how it works and how to make it work for you. For all of those images (or folders) that have "?" on them, you will have to go through the process of locating them and showing Lightroom where they are. With 18,000 images, and many of them missing, it might just be easier to start a new catalog and organize things properly from the beginning in that catalog.

  • PERFORM_CONFLICT_TAB_TYPE short dump during data loading into ODS

    Hi,
    I'm trying to load data into an ODS in the Production and QA systems, but I'm getting a short dump that says PERFORM_CONFLICT_TAB_TYPE.
    In the Development system the data loads into the ODS fine.
    So please tell me what I have to do.
    I will assign the points.
    Rizwan

    here it is..
    Note 707986 - Writing in trans. InfoCubes: PERFORM_CONFLICT_TAB_TYPE
    Summary
    Symptom
    When data is written to a transactional InfoCube, the termination PERFORM_CONFLICT_TAB_TYPE occurs. The short dump lists the following reasons for the termination:
               ("X") The row types of the two tables are incompatible.
               ("X") The table keys of the two tables do not correspond.
    Other terms
    transactional InfoCube, SEM, BPS, BPS0, APO
    Reason and Prerequisites
    The error is caused by an intensified type check in the ABAP runtime environment.
    Solution
    Workaround for BW 3.0B (SP16-19), BW 3.1 (SP10-13)
               Apply the attached correction instructions.
    BW 3.0B
               Import Support Package 20 for 3.0B (BW3.0B Patch20 or SAPKW30B20) into your BW system. The Support Package is available once note 0647752 with the short text "SAPBWNews BW3.0B Support Package 20", which describes this Support Package in more detail, has been released for customers.
    BW 3.10 Content
               Import Support Package 14 for 3.10 (BW3.10 Patch14 or SAPKW31014) into your BW system. The Support Package is available once note 0601051 with the short text "SAPBWNews BW 3.1 Content Support Package 14" has been released for customers.
    BW3.50
               Import Support Package 03 for 3.5 (BW3.50 Patch03 or SAPKW35003) into your BW system. The Support Package is available once note 0693363 with the short text "SAPBWNews BW 3.5 Support Package 03", which describes this Support Package in more detail, has been released for customers.
    The notes specified may already be available to provide advance information before the Support Package is released. In this case, however, the short text still contains the term "Preliminary version".
    Header Data
    Release Status: Released for Customer
    Released on: 18.02.2004  08:11:39
    Priority: Correction with medium priority
    Category: Program error
    Primary Component: BW-BEX-OT-DBIF Interface to Database
    Secondary Components: FIN-SEM-BPS Business Planning and Simulation
    Releases
    Software Component    Release   From Release   To Release   And Subsequent
    SAP_BW                30        30B            30B
    SAP_BW                310       310            310
    SAP_BW                35        350            350
    Support Packages
    Support Package        Release   Package Name
    SAP_BW_VIRTUAL_COMP    30B       SAPK-30B20INVCBWTECH
    Related Notes
    693363 - SAPBWNews BW SP03 NW'04 Stack 03 RIN
    647752 - SAPBWNews BW 3.0B Support Package 20
    601051 - SAPBWNews BW 3.1 Content Support Package 14
    Corrections Instructions
    Correction Instruction   Valid from   Valid to   Software Component   Ref. Correction   Last Modification
    301776                   30B          350        SAP_BW               J19K013852        18.02.2004 08:03:33
    Attributes
    Attribute                                  Value
    weitere Komponenten (other components)     0000031199
    Languages: German, English
    Vishvesh

  • Extracting explanations from planning app and loading into Oracle table

    Hi All,
    I had a requirement where I had to extract data from a planning application through ODI 11g and load it into Oracle RDBMS.
    I used essbase as my source in technology (since planning data is stored on essbase side) and oracle as my target.
    Now the data is getting extracted from essbase side and is getting loaded into Oracle table through ODI.
    Now the client requires that the explanations, or text values, also be extracted from the Planning application and loaded into the Oracle table.
    How can this be achieved? Is there a table on the SQL side (since a SQL database is used at the back end for the Planning app) which stores the explanations? If yes, please let me know which table it is.
    Kindly help me with this requirement.

    Hi,
    IKM SQL Control Append is perfect if you don't need incremental updates. If you need it, go for IKM Oracle Incremental Update (MERGE) or something like that.
    Regards,
    JeromeFr
