Reducing Database Hits

I have a reporting module. In that reporting module, I have to take data from different tables. Each field of the report gets its data from its own table, e.g. if the report has two fields, 1. EmployeeId and 2. CompanyId, then I want to show the employee name and company name in my report. For this I have to do a database hit to get the employee name from the Employee table (for the given EmployeeId), and then to get the company name I have to hit the DB again (the Company table). Is there any way to reduce such database hits, as this has slowed my reports down badly? If there are 10,000 rows then I have to do a DB hit for these two columns 10,000 times (and at runtime, too). That's bad, right? What should I do to make my application faster?

Why can't you do one query and bring the entire data set for the report in one shot? Hitting the database 10,000 times at runtime is very bad, as you already know. Learn how to write SQL properly: one query should be able to bring the entire data set over in one network round trip, as in the sketch below.
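A minimal sketch of that single query, assuming hypothetical table and column names (report_rows standing in for whatever holds the EmployeeId and CompanyId of each report line):

    -- One network round trip for all 10,000 rows: join the lookup tables
    -- instead of issuing two extra queries per row.
    SELECT r.employee_id,
           e.employee_name,
           r.company_id,
           c.company_name
      FROM report_rows r
      JOIN employee e ON e.employee_id = r.employee_id
      JOIN company  c ON c.company_id  = r.company_id;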

Similar Messages

  • To fetch first name and last name of 1000 pernrs in one database hit

    Hi All,
    We have around 1000 PERNRs for whom we need to fetch the first name and last name from PA0002. Since as of now we are using HR_READ_INFOTYPE, it will be called 1000 times, which ultimately increases the runtime.
    Does anyone know how to fetch the first name and last name in one database hit?
    thanks
    CB
    Moderator message: please use code tags for code only, and do not cross-post!
    Edited by: Thomas Zloch on Aug 27, 2010 1:00 PM

    Hi ,
    It is better to go with the LDB PNP.
    Specify the employee numbers on the selection screen and in the program write this:
    GET PERNR.
    Call the macro RP_PROVIDE_FROM_FRST P0002 SPACE PN-BEGDA PN-ENDDA.
    Collect all the data into an internal table (see the sketch below for the same set-based idea).
    Regards,
    Venkat Appikonda.
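    At the database level the principle is a single set-based read instead of 1000 single-record calls. A minimal SQL sketch; the staging table pernr_list is hypothetical, and VORNA/NACHN are the first/last-name columns of PA0002:

        -- Fetch all 1000 names in one database hit instead of 1000 HR_READ_INFOTYPE calls.
        -- Add BEGDA/ENDDA restrictions as needed to pick the valid record.
        SELECT p.pernr, p.vorna, p.nachn
          FROM pa0002 p
          JOIN pernr_list l ON l.pernr = p.pernr;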

  • Reduce Cache Hit Ratio

    Hi,
    I am doing benchmarking; for this purpose I need to minimize the cache hit ratio.
    Currently my cache hit ratio is 94% and now I need to reduce it.
    Oracle version: 11g
    Please provide some steps to accomplish this.
    Thanks

    user627648 wrote:
    Hi,
    I am doing benchmarking; for this purpose I need to minimize the cache hit ratio. Currently my cache hit ratio is 94% and now I need to reduce it. Oracle version: 11g. Please provide some steps to accomplish this. Thanks
    Did I read correctly that you want to "reduce" the hit ratio? This is the first time I am hearing that someone wants to reduce the hit ratio. It's as if I am getting mileage of 40 km but want to go for 10 km ;-). Why would you want to do that? What's the reason?
    Do lots of full table scans, remove your indexes, write bad queries, and it will come down :-) (see the sketch below).
    Aman....
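    A hedged illustration of that advice, using Oracle's FULL hint and a hypothetical table big_table:

        -- Force a full table scan even where an index exists, so most reads
        -- come from disk rather than the buffer cache and the hit ratio drops.
        SELECT /*+ FULL(b) */ COUNT(*)
          FROM big_table b
         WHERE b.padding_col IS NOT NULL;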

  • UCCE 10.0 EDMT will reduce database size?

    Hi
    We are doing a Tech Refresh, upgrading a CCE 8.0.3 server to CCE 10.0.
    The initial HDS DB size is 145 GB;
    after the patch upgrade to 8.5, the HDS DB size is 125 GB,
    but after EDMT to version 10.0 the migrated data is 79 GB.
    Did something happen in the migration towards the new version that discarded some data?
    In the EDMT logs everything is successful.
    In the 8.5 HDS Tables I can see the count is 366 items, and for the View folder 443 items.
    In the 10.0 HDS tables the count is 343 items, and for the View folder 456 items.
    Why do we see this difference? Is it normal? How did we get this much reduction in size?
    Also, there is a difference in HDS size between AW A and B too.
    Thanks
    HariSivaji

    Hello Kanan,
    I have installed a Solution Manager 7.1 SP03 system, and I installed using Oracle, not MaxDB.
    Right now the system is pretty pristine. The database allocation is 146 GB, and of that allocated space I have used about 100 GB.
    My database size is 146 GB, I do not have a shortage of disk space, and the DB will autoextend should I fill it up.
    So my question would be: of the 136 GB, what percentage is used? When you say it is growing, is it extending, or is it just
    using more of the allocated space?
    I am just now looking at a colleague's Solman 7.1 SP03 system. They have 176 GB allocated to the database, but they have 104 GB used. Their system has had a fair bit of use, but it is not a production system either, so it doesn't have a real-world load on it.
    Typically, reducing the size of the database would entail unloading the database to a flat file, resizing the database, and then reloading it. Typically this is a DBA function.
    So I am not sure at what point a MaxDB database would extend, if you mean the physical size of the database is getting bigger. Still, 136 GB wouldn't be considered an unreasonable allocation, as my system has 146 GB allocated to the DB and my colleague has
    176 GB allocated. But I only used 100 GB, and they have used many scenarios, have maybe 20 users, and have only used 4 GB more than me. So what is the size of the used space versus the allocation?
    You can remove various service sessions using report RDSMOPREDUCEDATA, but on a training system I am not sure how much space that would free up. You can run DB02 to see the allocation versus what is used, if you are not sure whether 136 GB is the allocation or the space used. If 136 GB is the allocation, it's actually low, and I suspect it is, since MaxDB tends to have a smaller footprint than Oracle. But if it is the space used and not the total allocation, then it seems very high for a training system.
    Hope this helps some.
    Regards,
    Paul

  • BW System copy from PRD to DEV - Reduce database size

    Hi All
    We are in the process of copying our existing Production BW system (3.5) to become a new development & test system. We want to reduce the database size significantly, from 1.4 TB to about 150-200 GB. We have deleted all cube and ODS data as well as much of the master data, but still have much tablespace tied up in monitor entries (e.g. table RSMONMESS) and IDocs. All of the notes I read say that we cannot delete from RSMONMESS in a "Production" environment; however, we are creating a new DEV system and really need to get rid of the monitor entries anyway. So there are two questions:
    1. Any idea how we can clear up table RSMONMESS or delete unwanted monitor entries?
    2. How can we delete IDocs copied from our production system without archiving them? (They are of no use in our new test system.)
    Please help.
    Many thanks in anticipation.
    Paul Sullivan
    Orica IT

    Hi Paul,
    See SAP Note 694895 - Performance and tables RSMON, RSDONE
    Note Language: English Version: 5 Validity: Valid Since 29.11.2005
    Summary
    Symptom
    Tables RSMON* (for example, table RSMONMESS) and tables RS*DONE (for example, table RSSELDONE) continuously increase in size in the BW system with each request that is created.
    Currently there is no option to reduce these tables.
    Do not delete any entries from these tables. If you did, this would have the following consequences:
    When you carry out the next check for the requests for which you have deleted records from these tables, the check will not run properly.
    The status of the relevant requests turns RED in the monitor, and also in all affected data targets that contain the request.
    The affected request and all subsequent requests are no longer visible in the reporting - all queries on the affected data targets then only display old data.
    Numerous dumps will occur in various situations.
    You will not be able to repair the errors caused by the deletion.
    More Terms
    RSMONMESS; RSSELDONE; Performance; RSMON; RSDONE;
    Cause and Prerequisites
    This is caused by a program error.
    Solution
    In the next BW release (BI in SAP NetWeaver 2004s - that is, BW 7.0), you will be able to archive entries from the RSMON* and RS*DONE tables using a request archiving process that archives the administrative information for requests.
    After archiving, these tables are then considerably reduced in size.
    Until then, there is unfortunately no option for reducing the tables.
    Rgds,
    Colum

  • Reducing Database Call Techniques...query caching the only way?

    What's the most efficient way to reuse data that gets called on practically every page of an application?
    For instance, I have a module that handles all my gateways, sub pages, subgateways, etc. This will only change whenever a change is made to the page structure in the admin portion of the application. It's really not necessary to hit the database every time a page loads. Is this a good instance to use query caching? What are the pros, cons and alternatives? I thought an alternative might be to store it in a session, but that doesn't sound too ideal.
    Thanks!
    Paul

    What's the most efficient way to reuse data that gets called on practically every page of an application?
    That sounds like a question from the certification exam. The answer is to store the data in session or application scope, depending on the circumstances. If the data depends on the user, then the answer is session. If the data persists from user to user, then it is application.
    admin portion of the application.
    Suggests users must log in. Otherwise you cannot distinguish admin from non-admin.
    This will only change whenever a change is made to the page structure in the admin portion of the application.
    Then I would go for storing the data in application scope, as the admin determines the value for everybody else. However, the session scope also has something to do with it. Since the changes are only going to occur in the admin portion, I would base everything on a variable, session.role.
    You cache the query by storing it directly in application scope within onApplicationStart in Application.cfc, like this:
    <cfquery name="application.myQueryName">
        <!--- the SELECT that loads the page-structure data goes here --->
    </cfquery>
    The best place for the following code is within onSessionStart in Application.cfc.
    <!--- It is assumed here that login has already occurred. Your code checks whether
    session.role is Admin. If so, make the changes. --->
    <cfif session.role is 'admin'>
    <!--- Make changes to the data in application.myQueryName, otherwise do nothing --->
    </cfif>
    Added edit: On second thought, the best place for setting the application variable is in onApplicationStart.

  • Please help: High volume database hitting limits

    Hi,
    I am using Berkeley DB to store real time ticks for market data. To give you an idea of the amount of data...
    - I get roughly 15000 ticks/sec on an average
    - by the end of the day the database file grows upto 50GB.
    My system configuration is...
    $ uname -a
    Linux gritsbox.rsi.com 2.6.18-92.1.17.el5 #1 SMP Wed Oct 22 04:19:38 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux
    - Quad core 1.6 Ghz
    - 4GB RAM
    - I have set the Berkeley db cache to 2GB and I am using the BTREE access method.
    I started out using one database to store everything, and it works well for three quarters of the trading day. However, Db::put calls increasingly start to take longer, and towards the end of the day the delay is so high that I start losing ticks. I figured that this could be because of the BTREE trying to rebalance a heavy tree on every put (is this correct?)
    I then decided to open 10000 (yeah!!!) databases in the environment, one for each symbol (ticker) that I am subscribed to. My idea behind this was that balancing a much smaller tree would be faster for Berkeley DB. However, when I run the code, I get a "Db::open: Cannot allocate memory" error after opening just 300-something databases.
    Any suggestions would be much appreciated!
    thanks
    Nishant

    Hi Bogdan, thanks for your reply. Below are the responses to your questions...
    1) I haven't written code to sort the records. I assume BTREE is storing them sorted.
    2) This is a multi-threaded application.
    3) I have to access the database on demand. Other processes would set the cursor to the last value in each database and return its value.
    4) The 50GB file is created by my old approach, where I was using one database to store everything. Today I was trying to open one database for each symbol/ticker (so around 10000 databases), which resulted in the "Could not allocate memory" error. In the old approach a new 50GB file was created every day.
    5) This is the only code that writes to the database...
         //Now write the values to DB
         stringstream ss_key_;
         ss_key_ << bsid_ << time_;
         string s_val = "{" + ss_val_.str() + "\"d\":\"d\"}";     //adding a dummy key to take care of the last comma
         string s_key = ss_key_.str();                  //keep the key string alive; c_str() on a temporary would dangle
         Dbt key((char *)s_key.c_str(), 25);            //BSID + Time
         Dbt value((char *)s_val.c_str(), s_val.length()+1);
         try {
              dbs_[1]->put(NULL, &key, &value, 0);
         } catch (DbException &e) {
              cout << "Error: " << e.what() << std::endl;
         }
    6) u_int32_t envFlags      = DB_CREATE | DB_INIT_MPOOL | DB_INIT_TXN | DB_INIT_LOCK | DB_THREAD;
    u_int32_t dbFlags      = DB_CREATE | DB_THREAD;
    7) No, I am not managing the cache.
    I appreciate you taking the time to help me out.
    thanks
    Nishant

  • Reduce database header?

    Hi all, I'm making a new layout but my header has got out of hand; it won't resize smaller. I get the double-arrow cursor when I mouse over the boundary between body and header, but it'll only move in one direction, down the page. What's the story? Many thanks for any help, Duncan

    Hello
    I assume that your header contains a forgotten object.
    I was able to reproduce the described behaviour with a text block containing several returns.
    Nothing is visible, but the header limit can't be moved to the top.
    Try to select the entire content of the header so the culprit will appear.
    Yvan KOENIG (from FRANCE, Tuesday, May 8, 2007 17:15:58)

  • Session in Flex

    You can use the following class as a session in Flex.
    [Bindable]
    private var session:Session = Session.getInstance();
    How to insert data into the session:
    session.setAttribute('username','sanka')
    How to retrieve data from the session:
    var userName:String=session.getAttribute('username');
    http://dl.dropbox.com/u/7375335/Session.as
    package model
    {
        import flash.utils.Dictionary;

        /**
         * @author Sanka Senavirathna
         * <p>
         *    Conforms to Singleton Design Pattern.
         * </p>
         * <p>
         *    Session.getInstance().getAttribute(key)
         *    Session.getInstance().setAttribute(key,value)
         * </p>
         * Code example for client:
         * [Bindable]
         * private var session:Session = Session.getInstance();
         * session.getAttribute('name')
         */
        [Bindable]
        public class Session
        {
            private static var instance:Session = new Session();
            private var dic:Dictionary = new Dictionary(); // keeps the user-session key/value pairs

            public function Session()
            {
                if (instance != null) { throw new Error('Cannot create a new instance.  Must use Session.getInstance().'); }
            }

            /**
             * This Session is a Dictionary that keeps key/value pairs. Key can be an object and value can be an object.
             * @langversion ActionScript 3.0
             * @playerversion Flash 8
             * @see getAttribute
             * @see setAttribute
             * @author Sanka Senavirathna
             */
            public static function getInstance():Session
            {
                return instance;
            }

            /**
             * <p>example: session.setAttribute('username','sanka')</p>
             * @langversion ActionScript 3.0
             * @playerversion Flash 8
             * @param key Describe key here.
             * @param value Describe value here.
             * @see getAttribute
             * @author Sanka Senavirathna
             */
            public function setAttribute(key:*, value:*):void
            {
                dic[key] = value;
            }

            /**
             * <p>example: var userName:String = Session.getInstance().getAttribute('username');</p>
             * @langversion ActionScript 3.0
             * @playerversion Flash 8
             * @param key Describe key here.
             * @see setAttribute
             * @author Sanka Senavirathna
             */
            public function getAttribute(key:*):*
            {
                return dic[key];
            }
        }
    }

    I think this will help you to keep your shared data in a single place. This will optimize your program by
    reducing database hits,
    reducing server hits.
    Once you retrieve the data, put it into the session (Flex) and retrieve it when you need it. For example, the username in the system: when the user has logged in successfully,
    put the user object into the session and hide the login component, then pop up your other component. You can then access the user data from the session by using
    var user:User=Session.getInstance().getAttribute('user'); or something like that.
    Hope this will help you.
    enjoy
    happy coding

  • How to create a composite key?

    Hello Friends!
    I need your help. I have to create a composite key.
    I have 2 tables. Table 1 contains 9 columns and table 2 contains 3 columns. I am writing down just the related columns of table1.
    Table1: SID (pk), Bytes_convert_test, Operator, Value, offset_test, Bytes_convert_jump, offset_jump
    Table2: SID (fk), value, type, ID (pk)
    In table2, type represents the 2 columns in table 1 (Bytes_convert_test and Bytes_convert_jump). The type column could be either of them: maybe for one record it is Bytes_convert_test and for another record Bytes_convert_jump.
    The value column in table2 may hold different values, or maybe the same values for both records. But my task is that if the type in table2 is bytes_to_convert, then the value column stores values for the bytes-to-convert record. In short, I have to separate values according to type. I am not sure whether I need SID as a fk in table2.
    Remember, the value column in table1 contains different entries.
    Waiting for a quick reply.
    BR,
    Zeeshan
    Edited by: user10899712 on 2009-apr-20 03:42

    Hello Zeeshan,
    You are saying you want to use the Type and Value fields in Table2 as foreign keys to Table1, and that based on logical operations, Type maps to Bytes_convert_test and/or Bytes_convert_jump in Table1. This means there is no way to do a straight join between Table1 and Table2 that I can see. Unless you have some simple logic you can add to a query to get the joins to work consistently, I would strongly recommend you use a simple foreign key in Table2.
    Using a simple foreign key that points to Table1's primary key not only makes the table joins easier, it also reduces database hits - since the primary key can then be used when traversing relationships to search the cache instead of always going to the database (see the sketch below).
    Best Regards,
    Chris
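    A minimal DDL sketch of Chris's suggestion; the names come from the question, but the types and sizes are guesses:

        -- Table2 references Table1 through a plain foreign key on SID, so a
        -- straight join works and the cache can be searched by primary key.
        CREATE TABLE table1 (
          sid                NUMBER PRIMARY KEY,
          bytes_convert_test VARCHAR2(50),
          bytes_convert_jump VARCHAR2(50)
        );
        CREATE TABLE table2 (
          id    NUMBER PRIMARY KEY,
          sid   NUMBER NOT NULL REFERENCES table1(sid),
          type  VARCHAR2(30),
          value VARCHAR2(50)
        );
        -- The join is then trivial:
        SELECT t2.id, t2.type, t2.value, t1.bytes_convert_test, t1.bytes_convert_jump
          FROM table2 t2
          JOIN table1 t1 ON t1.sid = t2.sid;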

  • BW queries

    Hi Guys
    Can anybody help me with the below set of queries? Basically I need answers to these in a simplified manner, such that they are understood by people who are totally new to BW.
    a) What is the memory management system (so that you could reduce database hits at loading time)?
    b) What kinds of source files does it support?
    c) What is the architecture of BW?
    d) What kinds of transformations are available in BW?
    e) What is the rating of BW in the ETL world?
    Thanks & Regards,
    Rohini

    Hi Rohini!
    1) SAP memory management system: the following document explains this: http://www50.sap.com/linux/pdf/mm630_e.pdf
    2) It supports CSV and ASCII files in the source system.
    3) The BW architecture is a bit complex, so I am giving links for your understanding: http://www.thespot4sap.com/Articles/SAP_BW_Architecture.asp
    http://media.wiley.com/product_data/excerpt/11/04712197/0471219711-1.pdf
    4) Basically, BW provides 4 types of transformation possibilities in transfer: copying from the datasource, constant, formula and ABAP routine (you can use various conversion routines in datasources while loading, like ALPHA and so on).
    5) I am not sure.
    6) To SAP systems,
       non-SAP external systems via 3rd-party extraction tools,
       databases with DB Connect,
       flat files,
       XML,
       and all relational and multi-dimensional data sources using UD Connect in BW 3.5.
    Hope it helps
    with regards
    ashwin
    Message was edited by: Ashwin Kumar Gadi

  • OIM 10g database reducing space (~120+GB)

    Looking for pointers on reducing database size: OTN/Oracle documentation/Metalink etc.
    Environment
    OIM: 9.1.0.2
    Database: Oracle 10g Standard Edition
    Database machine: Unix server

    This will help
    Could Not execute auto Check for DISPLAY
    Check if the DISPLAY variable is set - Failed
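    Whatever tooling you end up using, a reasonable first step is to find where the 120+ GB actually lives. A minimal Oracle sketch (dba_segments is a standard dictionary view; requires DBA privileges):

        -- List the 20 largest segments so purge/archive effort goes where the space is.
        SELECT *
          FROM (SELECT owner, segment_name, segment_type,
                       ROUND(bytes / 1024 / 1024 / 1024, 2) AS size_gb
                  FROM dba_segments
                 ORDER BY bytes DESC)
         WHERE ROWNUM <= 20;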

  • Does a cursor hit the database every time?

    Hi seniors,
    I am a little confused about the concept of the cursor. I just want to know whether a cursor has anything to do with database hits.
    Explanation:
    Say, for example, I have a cursor which returns 1000 rows from multiple tables; that recordset will be stored in some named SQL area called a cursor.
    Now my question is: when I loop over the cursor, will it get the actual data directly from the cursor, OR will it just get an address location from the cursor to the actual database table, or something else?
    If possible, please help me clear up this doubt.
    The reason being, I have created one package which will move or drop a table with all its objects from all the schemas available on the database server, and it makes use of a lot of cursors based on system views like ALL_TABLES, ALL_TRIGGERS etc.
    Thanks in advance.

    Example is :
    PROCEDURE move_table_pd (
                            cTable                  in      varchar2,
                            cFromSchema             in      varchar2 := 'STI_COUNTRY_USA',
                            cToSchema               in      varchar2 := 'STI_COMMON',
                            nVerbosity              in      number   := 0,
                            nExecuteImmediate       in      number   := 1
            ) IS
            BEGIN
                    if ((cTable is not null) AND (cFromSchema is not null) AND (cToSchema is not null)) then
                            if (nVerbosity <> 0) then
                                    print_start_time_pd;
                            end if;
                            cTableName              :=      upper(cTable);
                            cSourceSchema           :=      upper(cFromSchema);
                            cDestinationSchema      :=      upper(cToSchema);
                            DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',false);
                            DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',false);
                            -- Step 1 : Create the table at destination schema if needed.
                            create_table_pd(cTableName,cSourceSchema,cDestinationSchema);
                            -- Step 2 : Create Sequences and Triggers for the table at destination schema if needed.
                            create_trigger_and_sequence_pd(cTableName,cSourceSchema,cDestinationSchema);
                            -- Step 3 : Create Indexes for the table at destination schema if needed and then drop the rest all indexes if any.
                            create_index_pd(cTableName,cSourceSchema,cDestinationSchema);
                            -- step 4 : Insert the data at destination schema table
                            populateTable_pd(cTableName,cSourceSchema,cDestinationSchema);
                            -- The last step is to Drop the table and we need to really take care here.
                            -- step 5 : Drop the table from all other schema except destination schema.
                            drop_table_pd(cTableName,cDestinationSchema);
                            -- Again create the public synonyms on table
                            create_and_grant_synonym_pd(cTableName,cTableName);
                            -- Step 6 : Now execute all the statements from the statement array.
                            executeStatement_pd(nVerbosity,nExecuteImmediate,cDestinationSchema);
                             if (nVerbosity <> 0) then
                                    print_end_time_pd;
                            end if;
                    end if;
            EXCEPTION
                    WHEN OTHERS THEN
                            null;
            END move_table_pd;
            PROCEDURE create_table_pd (
                    cTableName in ALL_TABLES.TABLE_NAME%TYPE,
                    cSourceSchema in ALL_TABLES.OWNER%TYPE,
                    cDestinationSchema in ALL_TABLES.OWNER%TYPE
            ) IS
            BEGIN
                    -- Step 1 : Create or drop the table depending on the the tables schema.
                    if ((cTableName is not null) AND (cSourceSchema is not null))   then
                            FOR REC_TABLE IN cur_get_create_table_detail(cTableName,cSourceSchema)
                            LOOP
                            BEGIN
                                    if      (REC_TABLE.OWNER        =       cSourceSchema)  then
                                            --Get the DDL of the table
                                            cSqlStatement   :=      getObjectDDL_fd('TABLE',cTableName,REC_TABLE.OWNER);
                                            -- As This sql statement is with the source table schma name we need to replace that with the destination schema
                                            -- And then we should create the table.
                                            cSqlStatement   :=      FindAndReplace_fd(cSqlStatement,cSourceSchema,cDestinationSchema);
                                            -- Now first check whether the same table exist at destination schema or not if yes no need to create the same else create.
                                            nObjectFound    :=      isTableAlreadyExist_fd(cTableName,cDestinationSchema);
                                            if (nObjectFound = 0)   then
                                                    -- Now we are assured that the same table does not present at cDestinationSchema
                                                    -- So now we can push the statement to be executed in statements array.
                                                    pushStatement_pd(cSqlStatement);
                                                    cSqlStatement   :=      null;
                                            end if;
                                    end if;
                            EXCEPTION
                                    WHEN OTHERS THEN
                                            null;
                            END;
                            END LOOP;
                    end if;
            EXCEPTION
                    WHEN OTHERS THEN
                            null;
            END create_table_pd;
            PROCEDURE create_trigger_and_sequence_pd (
                    cTableName              in      ALL_TABLES.TABLE_NAME%TYPE,
                    cSourceSchema           in      ALL_TRIGGERS.OWNER%TYPE,
                    cDestinationSchema      in      ALL_TRIGGERS.OWNER%TYPE
            ) IS
                    -- Procedure local variables.
                    -- for triggers details
                    cTriggerSchema        ALL_TRIGGERS.owner%TYPE;
                    cDescription          ALL_TRIGGERS.description%TYPE;
                    cTriggerBody          ALL_TRIGGERS.trigger_body%TYPE;
                    cTriggerName          ALL_TRIGGERS.trigger_name%TYPE;
                    -- for sequence details
                    cSequenceOwner          ALL_SEQUENCES.sequence_owner%TYPE       ;
                    cSequenceName           ALL_SEQUENCES.sequence_name%TYPE        ;
                    -- Check Trigger count on table
                    cTriggerCount   number  :=0;
            BEGIN
                    -- Step 2 : Create the sequences, triggers and there synonyms and grants  on the the tables schema.
                    if ((cTableName is not null) AND (cSourceSchema is not null) and (cDestinationSchema is not null))   then
                            FOR REC_TRIGGER IN cur_get_create_trigger_detail(cTableName,cSourceSchema)
                            LOOP
                            BEGIN
                                    cTriggerSchema  :=      REC_TRIGGER.owner       ;
                                    cDescription    :=      REC_TRIGGER.description ;
                                    cTriggerBody    :=      REC_TRIGGER.trigger_body;
                                    cTriggerName    :=      REC_TRIGGER.trigger_name;
                                    if      (cTriggerSchema =       cSourceSchema)  then
                                                    -- check the sequences for that trigger if any then create the same
                                                    FOR REC_SEQUENCE IN cur_get_create_sequence_detail(cTriggerName,cSourceSchema) LOOP
                                                            cSequenceOwner  :=      REC_SEQUENCE.sequence_owner;
                                                            cSequenceName   :=      REC_SEQUENCE.sequence_name;
                                                    BEGIN
                                                            if ((cSequenceName is not null) AND (cSequenceOwner     =       cSourceSchema)) then
                                                            --Get the DDL of the sequence
                                                            cSqlStatement   :=      getObjectDDL_fd('SEQUENCE',cSequenceName,cSequenceOwner);
                                                            -- As This sql statement is with the source sequence schema name we need to replace that with the destination schema
                                                            -- And then we should create the sequence.
                                                            cSqlStatement   :=      FindAndReplace_fd(cSqlStatement,cSourceSchema,cDestinationSchema);
                                                            -- Now first check whether the same sequence exist at destination schema or not if yes no need to create the same else create.
                                                            nObjectFound    :=      isSequenceAlreadyExist_fd(cSequenceName,cDestinationSchema);
                                                                    if (nObjectFound = 0)   then
                                                                            -- Now we are assured that the same sequence does not present at cDestinationSchema
                                                                            -- So now we can push the statement to be executed in statements array.
                                                                            pushStatement_pd(cSqlStatement);
                                                                            cSqlStatement   :=      null;
                                                                            -- First drop synonym and then create
                                                                            drop_synonym_pd(cSequenceName,cDestinationSchema);
                                                                            -- Create the public synonym for sequence and give the grants to the sequence
                                                                            -- As we know this sequence is the part of the trigger so we do not need
                                                                            -- to create the synoyms and grants for the same
                                                                            --create_and_grant_synonym_pd(cSequenceName,cSequenceName);
                                                                            -- And now drop this existing sequences
                                                                            drop_sequence_pd(cSequenceName,cSourceSchema,cDestinationSchema);
                                                                    end if;
                                                            end if;
                                                    EXCEPTION
                                                            WHEN OTHERS THEN
                                                                    null;
                                                    END;
                                                    END LOOP;
                                            -- Now first check whether the same table exist at destination schema or not if yes no need to create the same else create.
                                            nObjectFound    :=      isTriggerAlreadyExist_fd(cTriggerName,cDestinationSchema);
                                            if (nObjectFound = 0)   then
                                                    -- Now we are assured that the same table does not present at cDestinationSchema
                                                    -- So now we can push the statement to be executed in statements array.
                                                    -- Rather we can create the trigger using some different way as show below
                                                    -- Create trigger using different way
                                                    cDescription    :=      FindAndReplace_fd(UPPER(cDescription),cTableName,cDestinationSchema||'.'||cTableName);
                                                    cSqlStatement     :='CREATE OR REPLACE TRIGGER '||cDescription||UPPER(cTriggerBody);
                                                    pushStatement_pd(cSqlStatement);
                                                    cSqlStatement   :=      null;
                                            end if;
                                            -- Now drop the existing synonyms on triggers if any
                                            -- As we do not create the synonyms for triggers then we dont have to drop the same
                                            --drop_synonym_pd(cTriggerName,cDestinationSchema);
                                            -- Now drop the existing triggers from other schema
                                            -- We do not need to drop the triggers manually as it gets dropped along with the table.
                                            --drop_trigger_pd(cTriggerName,cSourceSchema,cDestinationSchema);
                                    end if;
                            EXCEPTION
                                    WHEN OTHERS THEN
                                            null;
                            END;
                            END LOOP;
                    end if;
            EXCEPTION
                    WHEN OTHERS THEN
                            null;
            END create_trigger_and_sequence_pd;
            PROCEDURE       create_index_pd (
                    cTableName              in      ALL_INDEXES.TABLE_NAME%TYPE,
                    cSourceSchema           in      ALL_INDEXES.OWNER%TYPE,
                    cDestinationSchema      in      ALL_INDEXES.OWNER%TYPE
            ) IS
            BEGIN
            --cur_get_create_index_detail   index_name
                    if((cTableName is not null) AND (cSourceSchema is not null) AND (cDestinationSchema is not null) )      then
                            FOR REC_CREATE_INDEX IN cur_get_create_index_detail(cTableName,cSourceSchema)
                            LOOP
                                    BEGIN
                                            if      ((REC_CREATE_INDEX.index_name IS NOT NULL ) AND (REC_CREATE_INDEX.owner =       cSourceSchema))  then
                                                    --Get the DDL of the Index
                                                    cSqlStatement   :=      getObjectDDL_fd('INDEX',REC_CREATE_INDEX.index_name,REC_CREATE_INDEX.owner);
                                                    -- As This sql statement is with the source index schema name
                                                    -- we need to replace that with the destination schema
                                                    -- And then we should create the sequence.
                                                    cSqlStatement   :=      FindAndReplace_fd(cSqlStatement,cSourceSchema,cDestinationSchema);
                                                     -- Now first check whether the same index exist at destination schema or not
                                                     -- if yes no need to create the same else create.
                                                     nObjectFound    :=      isIndexAlreadyExist_fd(REC_CREATE_INDEX.index_name,cDestinationSchema);
                                                     if (nObjectFound = 0)   then
                                                            -- Now we are assured that the same index does not present at cDestinationSchema
                                                            -- So now we can push the statement to be executed in statements array.
                                                            pushStatement_pd(cSqlStatement);
                                                            cSqlStatement   :=      null;
                                                            -- Now as we have created a statement to create the index
                                                            -- So we need to check its existing Synonmyms and drop the same if exist
                                                            drop_synonym_pd(REC_CREATE_INDEX.index_name,cDestinationSchema);
                                                            -- Guess for Indexes we do not need to creat public synonym and no need to give grant to index
                                                            --create_and_grant_synonym_pd(REC_CREATE_INDEX.index_name,REC_CREATE_INDEX.index_name);
                                                             -- And now drop this existing indexes if any
                                                             --  We do not need to drop the indexes manually as it gets dropped along with the table.
                                                             -- drop_index_pd(REC_CREATE_INDEX.index_name,cSourceSchema,cDestinationSchema);
                                                     end if;
                                            end if;
                                    EXCEPTION
                                            WHEN OTHERS THEN
                                                    null;
                                    END;
                            END LOOP;
                    end if;
            EXCEPTION
                    WHEN OTHERS THEN
                            null;
            END create_index_pd;
            PROCEDURE       populateTable_pd (
                    cTableName              in      ALL_TABLES.TABLE_NAME%TYPE,
                    cSourceSchema           in      ALL_TABLES.OWNER%TYPE,
                    cDestinationSchema      in      ALL_TABLES.OWNER%TYPE
            ) IS
            BEGIN
                    if((cTableName is not null) AND (cSourceSchema is not null) AND (cDestinationSchema is not null) )      then
                            nObjectFound    :=      isTableAlreadyExist_fd(cTableName,cSourceSchema);
                            if (nObjectFound <> 0) then
                                    cSqlStatement   :=      'INSERT INTO ' ||cDestinationSchema||'.'|| cTableName||
                                                            ' SELECT * FROM '||cSourceSchema||'.'||cTableName;
                                    pushStatement_pd(cSqlStatement);
                                    cSqlStatement   :=      null;
                            end if;
                    end if;
            EXCEPTION
                    WHEN OTHERS THEN
                            null;
            END populateTable_pd;
            PROCEDURE        executeStatement_pd (
                    nVerbosity              in      number :=0,
                    nExecuteImmediate       in      number  := 1,
                    cExecuteOn              in      varchar2:=      'STI_COMMON'
            ) IS
            nTotalRecords   number  :=0;
            l_strsql LONG;
            cStmt varchar2(200);
            cError varchar2(300);
            cCurrentSchema  varchar2(50);
            BEGIN
                    if (aAllStatement is not null)  then
                            cCurrentSchema  :=      getCurrentSchema_fd;
                            if (nExecuteImmediate <> 0) then
                                    --altersession_pd;
                                    DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',false);
                                    DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',false);
                            end if;
                            nTotalRecords   :=      aAllStatement.COUNT;
                            if (nVerbosity <> 0)    then
                                    DBMS_OUTPUT.PUT_LINE('TOTAL STATEMENTS TO BE EXECUTED :'|| nTotalRecords);
                                    DBMS_OUTPUT.PUT_LINE('---------------------- EXECUTION BEGINS HERE -----------------');
                            end if;
                            --FOR cntr      in      1..nTotalRecords
                            FOR cntr      in      aAllStatement.FIRST..aAllStatement.LAST
                            LOOP
                                    BEGIN
                                            if aAllStatement.EXISTS(cntr) then
                                                    cSqlStatement :=        aAllStatement(cntr);
                                                    l_strsql := dbms_lob.SUBSTR( cSqlStatement, 32765, 1 );
                                                    if (nVerbosity <> 0)    then
                                                            DBMS_OUTPUT.PUT_LINE(cntr||' Now executing : '||cSqlStatement );
                                                    end if;
                                                    if (nExecuteImmediate <> 0)     then
                                                            if (l_strsql is not null)       then
                                                                    BEGIN
                                                                    --EXECUTE IMMEDIATE     cSqlStatement;
                                                                    EXECUTE IMMEDIATE       l_strsql;
                                                                    INSERT INTO gen_sql_log (t_sql_log_time,c_os_user,c_host,c_Server_Host,c_sql) VALUES (LOCALTIMESTAMP,sys_context('USERENV', 'OS_USER'),sys_context('USERENV', 'HOST'),sys_context('USERENV', 'SERVER_HOST'),l_strsql);
                                                                    EXCEPTION
                                                                            WHEN OTHERS THEN
                                                                                    cError:=substr(SQLERRM,1,300);
                                                                                    DBMS_OUTPUT.PUT_LINE('-------------<< ERROR >>-------------');
                                                                                    DBMS_OUTPUT.PUT_LINE('Error while running : '|| l_strsql);
                                                                                    DBMS_OUTPUT.PUT_LINE('');
                                                                                    DBMS_OUTPUT.PUT_LINE('Error Occured : '|| cError);
                                                                                    DBMS_OUTPUT.PUT_LINE('-------------<< END OF ERROR >>-------------');
                                                                    END;
                                                            end if;
                                                    end if;
                                            end if;
                                    EXCEPTION
                                            WHEN OTHERS THEN
                                                    null;
                                    END;
                            END LOOP;
                            aAllStatement.DELETE(aAllStatement.FIRST,aAllStatement.LAST);
                            --aAllStatement.TRIM(nTotalRecords);
                            nStatementCounter       :=0;
                            -- Move back to previous session
                            if (nExecuteImmediate <> 0) then
                                    --altersession_pd(cCurrentSchema);
                                    cCurrentSchema  :=      getCurrentSchema_fd;
                                    if (nVerbosity <> 0)    then
                                            DBMS_OUTPUT.PUT_LINE(' CURRENT SCHEMA : '|| cCurrentSchema);
                                     end if;
                            end if;
                            commit;
                    end if;
            EXCEPTION
                    WHEN OTHERS THEN
                    null;
            END executeStatement_pd;
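    As to the original question: a cursor is a handle into a result set held by the server, and each fetch is a round trip unless rows are array-fetched. A minimal PL/SQL sketch of reducing those hits with BULK COLLECT (the table name big_t is hypothetical):

        DECLARE
          TYPE t_rows IS TABLE OF big_t%ROWTYPE;
          l_rows t_rows;
        BEGIN
          -- One round trip fetches the whole result set into memory.
          SELECT * BULK COLLECT INTO l_rows FROM big_t;
          FOR i IN 1 .. l_rows.COUNT LOOP
            NULL; -- process l_rows(i) here with no further database hits
          END LOOP;
        END;
        /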

  • Error: Invalid interruption of a database selection

    Hi,
    When I execute the below code, the output is displayed without any problem.
    tables lfa1.
    select * from lfa1 order by lifnr.
    write / lfa1-lifnr.
    endselect.
    However, when I debug this piece of code, I get the error below.
    ShrtText - Invalid interruption of a database selection
    Runtime Errors - DBIF_RSQL_INVALID_CURSOR
    Exceptn - CX_SY_OPEN_SQL_DB
    Can anybody help me with the reason for the occurrence of this problem?

    Hi Vijay,
    Just go through the following link:
    Re: DBIF_RSQL_INVALID_CURSOR dump during debugging
    As Sooness pointed out, SELECT-ENDSELECT dumps only the first time. Try re-executing it and it will work fine.
    Meanwhile, it is always better to use SELECT ... INTO TABLE instead of SELECT-ENDSELECT, as this minimizes database hits.
    Regards
    Anil Madhavan

  • LMS 4.2.3 Database Backup Issue

    Hi All,
    I was unable to take a database backup of the LMS. It's failing with the below error.
    Error: Backup Failed: Error(404) Insufficient space on destination
    The destination drive has 150 GB of free space, but LMS still throws the error.
    The backup is failing. Any suggestions? How large is the database, and how can we check?
    Regards,
    Channa

    Hi VInod/Martin,
    The backup is failing
    Backup to 'E:/LMSDBBKP_Backups/Today's_new_backup' started at: [Fri Oct 17 18:00:00 2014]
    [Fri Oct 17 18:02:10 2014]  ERROR(405): Insufficient disk space on backup destination volume.
    Available Space is 188090140Kb and required space for backup is 226175301Kb.
    [Fri Oct 17 18:02:10 2014]  Backup failed: 2014/10/17 18:02:20
    The DB file size is very huge. I wanted to reduce the database file size by purging performance data, but now I am unable to purge performance data from the purge settings; it gives the error "Cannot connect to JRM, Check whether JRM is up and running".
    I can see that JRM is up and running normally.
    Is there any way I can purge the performance data from the back end?
    Regards,
    Channa
