How will storage be allocated in an UNDO tablespace made of multiple datafiles?

I have an UNDO tablespace with two datafiles: one with Auto Extend set to Yes, the other set to No.
With multiple datafiles in the UNDO tablespace:
1. Will storage be allocated in equal amounts on each datafile?
2. What happens when the datafile with Auto Extend set to No reaches its maximum size?
3. Or will storage be allocated in proportion to each datafile's size?
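For illustration only, here is a toy model of how allocation across the two files might behave. The round-robin assumption below is mine for the sketch, not documented Oracle behavior; Oracle decides internally where each undo extent goes, but once the non-autoextend file is full, new extents can only land in the file that can still grow.

```python
# Toy model of extent allocation across two undo datafiles.
# ASSUMPTION (for illustration): extents are handed out round-robin
# across the files, and a full non-autoextend file is simply skipped.
def allocate_extents(capacities, autoextend, n_extents, extent_size=1):
    used = [0] * len(capacities)
    i = 0
    for _ in range(n_extents):
        for _ in range(len(capacities)):  # find the next file with room
            if autoextend[i] or used[i] + extent_size <= capacities[i]:
                break
            i = (i + 1) % len(capacities)
        else:
            raise RuntimeError("tablespace full")
        used[i] += extent_size
        i = (i + 1) % len(capacities)
    return used

# file 0: Auto Extend = Yes; file 1: Auto Extend = No, capped at 3 extents
print(allocate_extents([10, 3], [True, False], 8))  # [5, 3]
```

In this model the files fill evenly at first; once the capped file is full, every further extent goes to the autoextend file, which answers question 2: allocation simply continues in the remaining file until the whole tablespace is exhausted.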


Similar Messages

  • In Hyperion Essbase, how will the project get allocated?

    Hi All:
    I am new to Hyperion and I have a doubt: how is a Hyperion project divided into modules? Kindly take any large financial project as an example and explain.
    Normally any project has 1. functional design, 2. technical design, 3. build, and 4. testing.
    But how will it be in Hyperion? Could you please tell me?
    Also, what kind of daily activities do we get in Hyperion Essbase?

    Please check out this thread => How to calculate cache setting from Essbase application.
    The ODTUG whitepaper is indeed quite good.
    Thank you,
    Todd Rebner

  • I am planning on buying an iPad 4 16GB, so I wanted to ask: after subtracting the storage consumed by the system and system apps, how much storage will I be left with?


    16GB vs 32GB vs 64GB: Which new iPad storage capacity should you get?
    http://www.imore.com/2012/03/08/16gb-32gb-64gb-ipad-capacity/
    How much content will fit on my iPod or iPhone?
    http://support.apple.com/kb/HT1867
     Cheers, Tom

  • How will the system calculate the price in a billing document?

    Hi gurus,
    1) In a billing document (invoice), how does the price come about?
    2) What is the use of the document pricing procedure among the billing type (VOFA) control fields? Where do we use this document pricing procedure, and in which scenario is it useful?
    3) Can we see pricing in the delivery?
    Thanks in advance

    Hi,
    A sales document belongs to a sales area. In SD, the pricing procedure is determined from the customer pricing procedure, the document pricing procedure and the sales area. The document pricing procedure is a field on the sales document type and on the billing type. The sales order and the invoice get the same pricing procedure if their document pricing procedure is the same, since it is copied into the invoice; but you can also have a different pricing procedure for the billing document.
    Generally, in the delivery you only have freight or packing charges; pricing in the delivery is not recommended.
    Hope it clarifies your doubts.
    Thanks,
    Pramod
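The determination rule described above (pricing procedure found from sales area + customer pricing procedure + document pricing procedure) can be sketched as a plain lookup table. The sales area key and the Z-prefixed procedure names below are made-up stand-ins, not the real configuration entries:

```python
# Toy sketch: pricing procedure = f(sales area, customer PP, document PP).
# All keys and values here are illustrative examples only.
PP_DETERMINATION = {
    ("1000/10/00", "1", "A"): "ZVAA01",  # standard order/invoice
    ("1000/10/00", "1", "I"): "ZICA01",  # intercompany billing type
}

def determine_pricing_procedure(sales_area, customer_pp, document_pp):
    return PP_DETERMINATION[(sales_area, customer_pp, document_pp)]

# Order and invoice get the same procedure only when their document PP
# (taken from the sales document type / billing type) is the same.
print(determine_pricing_procedure("1000/10/00", "1", "A"))  # ZVAA01
```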

  • How will pricing be carried out in intercompany billing?

    Hi geniuses,
    1) My question is how the system will carry out pricing in intercompany billing.
    2) How many pricing procedures should we maintain?
    3) How will the customer get the price, and how does the invoice flow?
    4) What are the configuration settings for intercompany billing?
    Thanks in advance

    Dear nag nageshwar,
    A small search on the SDN SD forum will help you answer your query.
    I suggest you visit http://sap-img.com/sap-sd.htm. It will give you an overview of the SAP SD module.
    Moreover, there is a separate section of FAQs with answers which will help you a great deal.
    Hope this helps you.
    Do award points if you found them useful.
    Regards,
    Rakesh
    P.S. you can send me a mail at my mail id [email protected] for any specific details

  • Hi, does anyone know how the iWatch will play music when it is not near a Bluetooth device, as stated in the feature listing: iCloud streaming, internal hard drive?


    I did not hear them state the feature requirements directly in the streaming event, but there are statements on the Apple website that clearly say an iPhone is necessary. I'm guessing that, other than Bluetooth for pairing with the iPhone, it's just not technically possible to fit other radios such as WiFi or LTE into that form factor or battery capacity. I don't even remember them mentioning whether it has a speaker or can pair directly with Bluetooth headphones.
    While I'm hoping that the watch has a reasonable stand-alone feature set, they didn't make it clear exactly what it can and can't do without an iPhone nearby. I also hope they communicate this better in the coming days.

  • HT1766 I cannot open my phone as the screen will not work. I have made an appointment with a technician tomorrow, but is there any way of backing my phone up to iCloud so that I do not lose any of my photos etc.?


    Plug it into your computer to sync/back it up.
    Is it set to back up to iCloud now?
    Or do you have Wi-Fi backup enabled in iTunes?
    Plug it in while on Wi-Fi and it will automatically back up.

  • How will the queue be created to process multiple IDocs into one queue?

    Hi SAP All,
    When I went into transaction code WE14 and pressed F4 for search help in the Queue Name field, it displayed several queues with the port referring to B0018 (the PI box), which means the IDocs will be sent to PI.
    My question is: how do the queues get created? What is the transaction code for creating queues in SAP R/3, and how do we set up the SAP R/3 box to send multiple IDocs into one queue? Overall, how do we work with queues?
    Could anybody help point me in the right direction?
    Regards,
    Seetharam.

    If you have the drivers CD or the LV 6 installation CD, then you could try installing just the Traditional DAQ drivers. I will do a search to see if I can find them... Good news, I found the drivers that you may need. Here is the link to download.
    After you install it, look in the vi.lib folder for the DAQ folder and see if you have a dozen .llb files. That is where you will find your missing VIs. You may also check the LV2009 folder to see if the driver got installed there. I think the driver only gets installed for a single version, and it might default to the latest at that time (I did not check which one it was).
    In any case, look for the missing VIs in the \National Instruments\LabVIEW 2009\vi.lib\Daq\ folder. The VIs are contained in the .llb files, so you can't just search for them through Windows. Look in the AI.llb file using LabVIEW.
    Here are issues that you may encounter. The link is to the DAQmx driver (I thought it was for Traditional DAQ):
    http://forums.ni.com/t5/LabVIEW/Error-10461-10008-DIO-Port-Config-Legacy-lvdaq-dll/m-p/947104
    Someone with a similar problem. It won't help you, but it explains what you are going through:
    http://forums.ni.com/t5/LabVIEW/Since-upgrading-labview-many-sub-vi-s-such-as-DIO-Port-Config-vi/m-p...
    Although the recommendation in this thread is to use the same LV version and driver as the previous machine, we know that is not your case. Newer LV version... and maybe the latest Traditional DAQ drivers... May be worth reading:
    http://forums.ni.com/t5/LabVIEW/An-error-occurred-loading-VI-AI-Group-Config-vi/m-p/734547
    Traditional DAQ 7.4.4
    http://digital.ni.com/softlib.nsf/websearch/E42B7A3564AB2F1A862572A7006D1564?OpenDocument

  • How will the system know whether the procurement type is E or F?

    Hello Friends,
    I am creating material A using material B. Material B is both produced in-house and procured externally.
    Now I want to produce material A from the in-house-produced material B. How should the input be given so that the system knows it is the in-house-produced material?
    2) Does any data need to be maintained for that?
    3) Does the valuation category play a role in letting the system know?
    4) If yes, how do I maintain multiple valuation categories for the same material?
    Regards
    Girish

    Hi Girish,
    You have to maintain split valuation for your material if you want to distinguish between internally and externally procured material.
    http://www.sap123.com/showthread.php?t=29
    http://help.sap.com/saphelp_47x200/helpdata/en/47/6101a549f011d1894c0000e829fbbd/frameset.htm
    You have to use a separate valuation type for in-house and for purchased goods.
    Even though you set up split valuation for your goods, SAP will consider both kinds of the product as available for production.
    You should choose the suitable valuation type for your production order manually (maybe there is another way I don't know about).
    Regards,
    Csaba
    Edited by: Csaba Szommer on Nov 14, 2008 11:28 AM
    Edited by: Csaba Szommer on Nov 14, 2008 11:31 AM

  • How can I turn flashback off on the UNDO tablespace?

    I have turned flashback on (Oracle database), then turned flashback off for some tablespaces.
    I have some questions:
    Can I turn flashback off on the UNDO tablespace?
    If I turn flashback off on the UNDO tablespace, will that impact the flashback feature or not?
    Thank you.

    TEST:
    SQL> flashback database to timestamp to_timestamp('2008-05-20 19:13:00', 'YYYY-MM-DD HH24:MI:SS');
    flashback database to timestamp to_timestamp('2008-05-20 19:13:00', 'YYYY-MM-DD HH24:MI:SS')
    ERROR at line 1:
    ORA-38753: Cannot flashback data file 5; no flashback log data.
    ORA-01110: data file 5: '+DATA/testdb/datafile/example.1075.651074487'
    ORA-38753: Cannot flashback data file 3; no flashback log data.
    ORA-01110: data file 3: '+DATA/testdb/datafile/undotbs1.1086.651074273'
    SQL> alter database datafile 5 oflfine;
    alter database datafile 5 oflfine
    ERROR at line 1:
    ORA-01916: keyword ONLINE, OFFLINE, RESIZE, AUTOEXTEND or END/DROP expected
    SQL> alter database datafile 5 offline;
    Database altered.
    SQL> flashback database to timestamp to_timestamp('2008-05-20 19:13:00', 'YYYY-MM-DD HH24:MI:SS');
    flashback database to timestamp to_timestamp('2008-05-20 19:13:00', 'YYYY-MM-DD HH24:MI:SS')
    ERROR at line 1:
    ORA-38753: Cannot flashback data file 3; no flashback log data.
    ORA-01110: data file 3: '+DATA/testdb/datafile/undotbs1.1086.651074273'
    SQL> alter database datafile 3 offline;
    Database altered.
    SQL> flashback database to timestamp to_timestamp('2008-05-20 19:13:00', 'YYYY-MM-DD HH24:MI:SS');
    Flashback complete.
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01092: ORACLE instance terminated. Disconnection forced
    Process ID: 18585
    Session ID: 115 Serial number: 5

  • Difference Between Redo Logfiles, Undo Tablespace, Archive Logfiles, Datafiles

    Can someone please highlight the difference between redo and undo logs?
    Also, why do we need separate archive logs when the data from the redo logs can be written directly to the datafiles?
    Your help will be highly appreciated.

    Also why do we need seperate archive logs when the data from redo logs can be directly written to Datafiles..
    Redo logs are the transaction logs of your database; they are effectively a recorder of your database activity. By replaying the redo logs you can bring the database back to a particular state. For further detail, let's take an example.
    Suppose I have three disks (disk1, disk2 and disk3). My database has one datafile, f1.dbf, on disk1, and two redo log files, r1.log and r2.log, on disk2. Every day at 12:00 PM I copy f1.dbf from disk1 to disk3. On Monday I successfully copied f1.dbf from disk1 to disk3 at 12:00 PM; then at 1:00 PM on Monday I inserted a record into a table "test":
    Insert into test values (1)
    So at 1:00 PM on Monday a block inside my datafile f1.dbf was modified, and that block contains the record of table test (value 1). If at 1:00 PM you compare f1.dbf on disk1 with the copy on disk3, only the live f1.dbf contains the new test record with value 1; the backed-up f1.dbf on disk3 does not. On disk2, r1.log contains the record I inserted at 1:00 PM; in other words, I have the redo (transaction log) for datafile f1.dbf inside r1.log, and I can redo (re-apply) from r1.log the insert I performed at 1:00 PM.
    Suppose at 2:00 PM on Monday disk1 smokes out, taking f1.dbf with it. What will I do? I will install a new disk, disk4, and copy the datafile f1.dbf from the backup on disk3. This restored f1.dbf does not contain the record from "Insert into test values (1)", which occurred after the backup. After copying f1.dbf to the new disk4, I start reading and applying my redo log r1.log from 12:00 PM onward, which contains the redo entry (redo vectors) for "insert into test values (1)", to the restored file. In short, the recorder of my transaction log (r1.log) is replayed, and after applying the redo from r1.log I have exactly the data that existed just before disk1 crashed at 2:00 PM. That is the purpose of the redo log in a media failure: without r1.log I could only recover my database to 12:00 PM, the time of the copy from disk1 to disk3, and that restored file would not have the entry from "insert into test values (1)". I could only bring my database back to the most recent backup at 12:00 PM, not to the moment just before the failure.
    You may ask why we spend three disks on one piece of information: disk1 (f1.dbf), disk2 (r1.log) and disk3 (the copy of f1.dbf). Why not just two disks (disk1 and disk3), with no separate storage area for r1.log? Because r1.log only holds the transaction log between your current backup and the next one, which starts the next day, Tuesday, at 12:00 PM. It does not contain the whole redo (transaction log) history of your database. When r1.log fills up, logging switches to r2.log; when r2.log fills, it switches back to r1.log and overwrites what was written earlier, including "insert into test values (1)", with new transactions. To protect against this overwriting, the redo is archived and kept safe until your next backup. Redo log switching depends on your database's transaction pace and the log size: the more database activity, the faster the logs fill and switch.
    That is why we need redo and archived log files. As for the data from the redo logs being written directly to the datafiles: that is wrong. Whenever you perform any activity (DML), it modifies two buffers: one block in the database buffer cache and one entry in the redo log buffer. Every 3 seconds, or when the redo buffer is one-third full, or on commit, whichever occurs first, LGWR writes the transaction log from the redo buffer to the online redo log file.
    DBWR writes data (dirty buffers) from the database buffer cache when a checkpoint occurs (checkpointing is the process that updates all datafile headers with the SCN). Before Oracle 8.0 it was LGWR that did the CKPT job of updating all datafile headers with the SCN, but since Oracle 8.0 the CKPT process is responsible for this, because LGWR is already busy writing redo buffer data to the redo log file.
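The backup-plus-redo-replay story above can be condensed into a toy simulation, with plain Python dictionaries standing in for the datafile and the redo log. This is an illustration of the idea, not Oracle's implementation:

```python
import copy

# Toy model: a "datafile" is a dict, and the redo log records every change.
datafile = {}
redo_log = []

def apply_change(table, value):
    datafile.setdefault(table, []).append(value)   # DBWR side: data blocks
    redo_log.append((table, value))                # LGWR side: redo entries

# Monday 12:00: backup is taken (copy of f1.dbf to disk3).
backup = copy.deepcopy(datafile)
backup_redo_pos = len(redo_log)

# Monday 13:00: INSERT INTO test VALUES (1)
apply_change("test", 1)

# Monday 14:00: disk1 dies. Restore the backup, then replay the redo
# generated since the backup to roll forward to the moment of failure.
restored = copy.deepcopy(backup)
for table, value in redo_log[backup_redo_pos:]:
    restored.setdefault(table, []).append(value)

print(restored)  # {'test': [1]} - identical to the lost datafile
```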

  • How much space will be allocated for Occurs 10

    Hello,
    Kindly clarify my below queries?
    1) If OCCURS 0 takes a minimum of 8 KB of space,
    then how much will OCCURS 10 take?
    i.e. when should we use OCCURS 0, OCCURS 10, OCCURS 100?
    2) What is the maximum number of lines an internal table can contain?
    Regards,
    Rachel

    Hi,
    OCCURS <n>. The information below answers both questions.
    This size does not belong to the data type of the internal table, and does not affect the type check. You can use the above addition to reserve memory space for <n> table lines when you declare the table object.
    When this initial area is full, the system makes twice as much extra space available, up to a limit of 8 KB. Further memory areas of 12 KB each are then allocated.
    You can usually leave it to the system to work out the initial memory requirement. The first time you fill the table, little memory is used. The space occupied, depending on the line width, is 16 <= <n> <= 100.
    It only makes sense to specify a concrete value of <n> if you can specify a precise number of table entries when you create the table and need to allocate exactly that amount of memory (exception: appending table lines to ranked lists). This can be particularly important for deep-structured internal tables where the inner table only has a few entries (fewer than 5, for example).
    To avoid excessive requests for memory, large values of <n> are treated as follows: the largest possible value of <n> is 8 KB divided by the line length. If you specify a larger value of <n>, the system calculates a new value so that n times the line width is around 12 KB.
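The capping rule in the last paragraph can be written out numerically. Taking 8 KB and 12 KB as 8192 and 12288 bytes, this is a sketch of the rule exactly as stated above, not the ABAP kernel's actual arithmetic:

```python
# Sketch of the OCCURS <n> capping rule described above:
# if n * line_width would exceed 8 KB, recompute n so that the
# initial area lands near 12 KB instead.
def effective_occurs(n, line_width, soft_cap=8192, target=12288):
    if n * line_width <= soft_cap:
        return n
    return target // line_width  # n * line_width is now around 12 KB

print(effective_occurs(10, 100))    # 10  -> 1000 bytes, under the cap
print(effective_occurs(1000, 100))  # 122 -> ~12 KB initial area
```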
    Thanks
    Venkat.O

  • How does the storage unit get populated automatically in the TO during GR?

    Hi,
    I have a question to ask you with regards to SU in WM.
    I perform a goods receipt against a production order in MIGO.
    and I notice the Transfer Requirement get created and the Transfer Order also get created automatically in the background.
    My question is the system also automatically assign a storage unit in the transfer order (LTAP-NLENR), may I know what system configuration is requre to setup such automatic SU created for the TO, where in SPRO?
    In between this is a "Bulk Storage" Strategy used in this storage type.
    thanks
    tuff

    If your destination storage type is SU-managed and you have an internal number range assigned, then the system will automatically propose the SU number (the next number from the number range).
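"The next number from the number range" is just an internal counter over a configured interval. A minimal sketch of that mechanism, with made-up interval bounds:

```python
# Toy sketch of an internal number range: each new storage unit
# simply takes the next number from the configured interval.
class NumberRange:
    def __init__(self, start, end):
        self.next_number, self.end = start, end

    def next(self):
        if self.next_number > self.end:
            raise RuntimeError("number range exhausted")
        n, self.next_number = self.next_number, self.next_number + 1
        return n

su_range = NumberRange(1000000, 1999999)
print(su_range.next())  # 1000000 - proposed for the first TO item
print(su_range.next())  # 1000001 - proposed for the next one
```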

  • How will the index be fired?

    Hi All,
    I have one problem.
    I created an index on two columns as follows:
    CREATE INDEX gpm_personprofiledemo_dob2_idx ON gpm_personprofiledemo
    (profile_updated_dt, birth_dt)
    PCTFREE 10
    INITRANS 2
    MAXTRANS 255
    TABLESPACE gpmix
    STORAGE (
    INITIAL 104857600
    NEXT 52428800
    PCTINCREASE 0
    MINEXTENTS 2
    MAXEXTENTS 5
    );
    When I query the table as follows, the index is getting fired:
    SELECT /*+ INDEX(PPD GPM_PERSONPROFILEDEMO_DOB2_IDX) */
    PPD.PANELIST_ID, PPD.FNAME, PPD.LNAME, PPD.GENDER, PPD.BIRTH_DT, ROUND(TRUNC(SYSDATE-PPD.BIRTH_DT)/365) AGE,
    PPD.HISPANIC_LANGUAGE, PPD.PROFILE_UPDATED_DT
    FROM GPM_PERSONPROFILEDEMO PPD
    WHERE PPD.PROFILE_UPDATED_DT = TO_DATE('28-APR-2002', 'DD-MON-YYYY')
    AND PPD.BIRTH_DT = TO_DATE('15-OCT-1973', 'DD-MON-YYYY');
    Even when I tried the following query, the index is also getting fired:
    SELECT /*+ INDEX(PPD GPM_PERSONPROFILEDEMO_DOB2_IDX) */
    PPD.PANELIST_ID, PPD.FNAME, PPD.LNAME, PPD.GENDER, PPD.BIRTH_DT, ROUND(TRUNC(SYSDATE-PPD.BIRTH_DT)/365) AGE,
    PPD.HISPANIC_LANGUAGE, PPD.PROFILE_UPDATED_DT
    FROM GPM_PERSONPROFILEDEMO PPD
    WHERE PPD.PROFILE_UPDATED_DT = TO_DATE('28-APR-2002', 'DD-MON-YYYY');
    But when I use the following query, it is not getting fired; it is taking an "INDEX FULL SCAN":
    SELECT /*+ INDEX(PPD GPM_PERSONPROFILEDEMO_DOB2_IDX) */
    PPD.PANELIST_ID, PPD.FNAME, PPD.LNAME, PPD.GENDER, PPD.BIRTH_DT, ROUND(TRUNC(SYSDATE-PPD.BIRTH_DT)/365) AGE,
    PPD.HISPANIC_LANGUAGE, PPD.PROFILE_UPDATED_DT
    FROM GPM_PERSONPROFILEDEMO PPD
    WHERE PPD.BIRTH_DT = TO_DATE('15-OCT-1973', 'DD-MON-YYYY');
    Can you explain why?
    What I am thinking is that I created the index on (profile_updated_dt, birth_dt), not on (birth_dt, profile_updated_dt).
    Is there any preference for the order of columns during the creation of the index?
    I am stuck with this issue and unable to solve it. Can anyone help me out?
    It is very urgent.
    Thanks in advance
    Sudarshan

    You say the query is "not getting fired" and, in the same breath, that it is "taking an INDEX FULL SCAN". I would have thought the latter part of that statement contradicts the earlier part: an INDEX FULL SCAN does use the index.
    Since birth_dt is not the leading column of your index, is it doing an INDEX FULL SCAN on the GPM_PERSONPROFILEDEMO_DOB2_IDX index?
    If you have queries that involve profile_update_dt + birth_dt and others that involve only birth_dt, then why not have the index on birth_dt + profile_update_dt instead?
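Why the leading column matters can be shown with a toy composite index: a sorted list of (profile_updated_dt, birth_dt) tuples searched with bisect. This only loosely mimics a B-tree, but the asymmetry is the same: a predicate on the leading column can seek directly, while a predicate on the trailing column alone must read every entry, which is what an INDEX FULL SCAN does.

```python
from bisect import bisect_left

# Toy composite index on (profile_updated_dt, birth_dt): entries are
# sorted by the leading column first, so only leading-column lookups
# can jump straight to the matching range.
index = sorted([
    ("2002-04-28", "1973-10-15"),
    ("2002-04-28", "1980-01-01"),
    ("2002-05-01", "1973-10-15"),
    ("2002-05-02", "1990-06-30"),
])

def seek_leading(value):
    """Range scan: binary-search to the first matching leading key."""
    i = bisect_left(index, (value, ""))
    out = []
    while i < len(index) and index[i][0] == value:
        out.append(index[i])
        i += 1
    return out

def scan_trailing(value):
    """No leading key given: every entry must be visited (a full scan)."""
    return [entry for entry in index if entry[1] == value]

print(seek_leading("2002-04-28"))   # two entries, found by seeking
print(scan_trailing("1973-10-15"))  # found, but only after reading all 4
```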

  • How to create a DFF and how the values will be stored in the database

    Dear All,
    I have created a DFF in OA Framework.
    That is working fine.
    But I don't know how to save the values of the DFF.
    Do I need to attach an EO to the view object which is used in the DFF?
    I would be thankful to you.
    Thanks,
    sheetal mittal

    Hi Gorav,
    Actually it's a seeded page, so I cannot do anything with the save functionality.
    I have followed the steps below:
    1) First I added the flexfield to the seeded page and defined all the properties, view object name, etc.
    2) After that I created a new custom view object and entity object, then added that EO to the custom VO.
    3) After that I extended the AM to add the new custom view object.
    4) After that I extended the CO and wrote the following code:
    public void processRequest(OAPageContext pageContext, OAWebBean webBean)
    {
        super.processRequest(pageContext, webBean);
        OAApplicationModule am = pageContext.getApplicationModule(webBean);
        NegotiationCreationAM negotiationcreationam = (NegotiationCreationAM)pageContext.getApplicationModule(webBean);
        int StrAucId = negotiationcreationam.getAuctionHeaderId();
        String strAuctionId = String.valueOf(StrAucId);
        Serializable[] Params = {strAuctionId};
        am.invokeMethod("InitLoad", Params);
    }
    public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
    {
        super.processFormRequest(pageContext, webBean);
    }
    NOTE: AuctionHeaderId is the primary key of the page.
    5) I have written the code below in the AM:
    public void InitLoad(String strHeaderId)
    {
        OAViewObject vo = (OAViewObject)getxxEarnestDffVO1();
        if (vo.getRowCount() == new Number(0).intValue())
        {
            Row row = vo.createRow();
            vo.setWhereClause(" AUCTION_HEADER_ID = " + strHeaderId);
            vo.executeQuery();
        }
        else
        {
            vo.setWhereClause(" AUCTION_HEADER_ID = " + strHeaderId);
            vo.executeQuery();
        }
    }
    6) But when I save the details it gives me:
    Unable to perform transaction on the record.
    Cause: The record contains stale data. The record has been modified by another user.
    Action: Cancel the transaction and re-query the record to get the new data.
    Please help me out.
    regards,
    sheetal
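As background on the error itself: "The record contains stale data" is OA Framework's optimistic-locking check, typically driven by a row version column such as OBJECT_VERSION_NUMBER. An update is rejected when the version read earlier no longer matches the stored one, which can happen when two VO instances (for example a custom VO and the seeded page's VO) touch the same row. A toy model of the mechanism; all names here are illustrative:

```python
# Toy optimistic locking: each row carries a version number; an update
# is rejected if the version read earlier no longer matches the stored one.
class StaleDataError(Exception):
    pass

db = {"AUCTION_HEADER_ID=1": {"value": "old", "version": 1}}

def read(key):
    row = db[key]
    return row["value"], row["version"]

def update(key, new_value, version_read):
    row = db[key]
    if row["version"] != version_read:  # someone else changed the row
        raise StaleDataError("The record contains stale data.")
    row["value"], row["version"] = new_value, row["version"] + 1

value, v1 = read("AUCTION_HEADER_ID=1")
update("AUCTION_HEADER_ID=1", "mine", v1)        # succeeds, version -> 2
try:
    update("AUCTION_HEADER_ID=1", "theirs", v1)  # stale version: rejected
except StaleDataError as e:
    print(e)
```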
