Limitation of IN clause

Hi Gurus,
Can't seem to find the right answer by just googling. Does anyone know how many values I can pass to the IN clause (and what the limit is), as in the example below?
select number_id from foo where number_id IN (1,2,3,4,5,6,7,...AND SO THE LIST GOES)
Will Oracle be able to perform and optimize the query well if I am passing 800 or more values in the IN clause? Or should I store the data in a table or nested table and then join it to the other table?
Pls advise.
rgds,
guest

Hi,
user12868294 wrote:
Hi Gurus,
Can't seem to find the right answer by just googling. Does anyone know how many values I can pass to the IN clause (and what the limit is), as in the example below?
select number_id from foo where number_id IN (1,2,3,4,5,6,7,...AND SO THE LIST GOES)
The limit is 1000 in Oracle 10 (ORA-01795: maximum number of expressions in a list is 1000).
For discussion ad absurdum see the following threads:
In Oracle 10g, what is the max number of values we can pass to  'IN' Clause
Re: IN / NOT IN limit
Will Oracle be able to perform and optimize the query well if I am passing 800 or more values in the IN clause? Or should I store the data in a table or nested table and then join it to the other table?
Yes, if you have anywhere near the limit, you're probably better off storing the numbers in a table.
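A minimal sketch of that approach (the table and column names here are illustrative, not from the original post): load the values into a global temporary table once, then use a subquery or join, which also keeps the SQL text constant and shareable in the cursor cache.
{code}
-- one-time setup: a scratch table for the lookup values
CREATE GLOBAL TEMPORARY TABLE lookup_ids (
    number_id NUMBER PRIMARY KEY
) ON COMMIT DELETE ROWS;

-- per run: insert the 800+ values (e.g. via a batch/array insert), then:
SELECT f.number_id
FROM   foo f
WHERE  f.number_id IN (SELECT l.number_id FROM lookup_ids l);
{code}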

Similar Messages

  • Using if logic in the where clause of a select statement

I have a select statement, and in the select list there is a variable all_off_trt that can be 'Y' or 'N'.
In the where clause I want to make it so that if a form checkbox is checked and all_off_trt is 'Y' then the row is excluded; if the checkbox isn't checked then the row is selected no matter what all_off_trt is.
Is there any way to include either an if statement or a case statement within the where clause to achieve this? If not, is there another way of doing it?
    Basically I am looking for a case statement like this
    case
    when all_off_trt = 'Y' and mail_para.code = 'Y' then false
    else true
    end
    Message was edited by:
    Tugnutt7

Ok, so that really doesn't solve my problem. I have 3 different fields that I need to do that with, each combined in a select statement that prints an email list, along with other things limiting the where clause.
    This is currently what I have, tested and working 100%.
    cursor email_cur is
select unique p.email, s.all_off_trt, s.all_deceased, s.no_enroll
    from participant p, trialcom t, ethics s
    where p.status='A'
    and p.surname=t.surname
    and p.initials=t.initials
    and s.trial_cd = t.tricom
    and s.centre = t.centre
    and p.email is not null
    and (t.centre in (select code from mail_parameters where user_name=user and mail_para='CENTRE')
    or 'XX' in (select code from mail_parameters where user_name=user and mail_para='CENTRE'))
    and (t.tricom in (select code from mail_parameters where user_name=user and mail_para='TRIAL')
    or 'XX' in (select code from mail_parameters where user_name=user and mail_para='TRIAL'))
    and (t.role in (select code from mail_parameters where user_name=user and mail_para='ROLE')
    or 'XX' in (select code from mail_parameters where user_name=user and mail_para='ROLE'))
    and (p.country in (select code from mail_parameters where user_name=user and mail_para='COUNTRY')
    or 'XX' in (select code from mail_parameters where user_name=user and mail_para='COUNTRY'))
    and (t.represent in (select code from mail_parameters where user_name=user and mail_para='REPRESENT')
    or 'XX' in (select code from mail_parameters where user_name=user and mail_para='REPRESENT'));
This is in a program unit that runs when a button is clicked. At the end of that I need to add on the 3 case statements that help further narrow down the selection of emails to be printed. Then it prints the emails selected by this statement into a file, so it has to be done right in the select statement. The three table columns are all_off_trt, all_deceased, and no_enroll. The form has 3 checkboxes, one for each, that when checked (giving the variable associated with the checkbox a value of 'Y') should exclude all emails that have a 'Y' in the corresponding table column.
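A hedged sketch of one way to express that rule directly in the where clause (the :chk_* bind names stand for the three form checkbox items and are illustrative, not from the original form): each predicate lets a row through unless its checkbox is 'Y' and the row's flag is 'Y'.
{code}
-- appended to the existing where clause above
and not (nvl(:chk_all_off_trt,  'N') = 'Y' and s.all_off_trt  = 'Y')
and not (nvl(:chk_all_deceased, 'N') = 'Y' and s.all_deceased = 'Y')
and not (nvl(:chk_no_enroll,    'N') = 'Y' and s.no_enroll    = 'Y')
{code}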

  • The Definitive Amex 3X CLI Guide

Note: while this was at one time the correct guide, recent changes in policy have superseded parts of it. There will be updated information coming soon. One of the most popular topics of discussion on the forum is the American Express 3X CLI. Here is a small guide I put together to alleviate some misconceptions, based on personal experiences and discussions here and on other boards. Please feel free to suggest modifications or enhancements. Please don't copy/paste the information to other boards; linking here is fine.
    What is the 3X CLI?
===========================================================
American Express is one of the most sought after credit cards in the market. They are also typically one of the more generous companies with respect to credit limits outside of the credit unions. For folks who do not qualify for NFCU and other credit unions known to issue high credit limits despite low FICO scores and/or moderate credit history, the best way to get high(er) CLs is to first get accepted by American Express and then apply for an increased credit limit of up to 3 times the existing approved limit after a certain minimum number of days have passed (discussed in detail below). This really comes in handy when the initial approved limit is on the lower side (for example $2,000) due to internal American Express risk score models, mostly for folks starting out in their credit journey or with other risky credit/income factors. After completing the "60" day magic number of the account being open and in good standing, one can request an increased credit limit of up to 3X the initial limit starting with the 61st day, meaning someone who got approved for $2,000 could request that the new credit limit be $2,000 x 3 = $6,000, someone who got approved for $3,000 initially could request that the new credit limit be $3,000 x 3 = $9,000, and so on.

Why is the 3X CLI Important?
===========================================================
Credit Utilization is one of the most important components of the FICO score (30% per official wording from FICO). Given the same amount of spending, with higher available credit, utilization is lower and hence FICO scores are higher. But more importantly, this is a gateway to getting approved for other prime/rewards cards like Visa Signature, MasterCard World, points, cashback etc. offered by various lenders like Chase, Citi, BoA, US Bank etc., who typically issue them if they see an existing high limit card. Higher limits from one lender attract higher limits from others, up until the overall available credit reaches a certain threshold. No other lender is known to have a 3X CLI policy, and that too as early as the 61st day. One may be able to procure a CLI from other lenders, but typically it involves a hard inquiry or sometimes a flat out denial because the account is considered too new. Amex is the only lender that offers this. More recently, though, GEMB issued credit cards are also known to get generous increases (sometimes even > 3X initial limits) via a soft pull. This policy is relatively new but widely reported to be successful across different boards.

Is the 3X CLI automatic or guaranteed?
===========================================================
Most definitely not. The components factored into a CLI approval or denial are not published, but are widely accepted to be similar to the FICO models. Amex also asks for income and rent during the CLI request process. Other factors remaining the same, the chances of a CLI approval are higher if the account has always been in good standing, e.g. lower utilization, low risk spending patterns, on time payments etc. The 3X CLI request has to be manually initiated by the account holder after becoming eligible.
    Will there be a hard inquiry when requesting 3X CLI?
    ===========================================================
Fortunately not. American Express initiates soft pulls on Experian throughout the life of an open account with random frequency. During the first 60 days, it is not uncommon for Amex to have soft pulled at least 3-4 times. The CLI approval or denial will be based on the last soft pull before the CLI request and other factors like income/rent etc. A new hard pull or soft pull is not initiated when requesting the 3X CLI. If in doubt, get a copy of your Experian report and see when Amex issued a soft pull to gauge account statistics at that snapshot, and request CLI accordingly. Post 186 explains how to get your Experian report and check for soft pulls. If there are concerns that the overall utilization or Amex utilization was high during the last soft pull, it might be worth waiting a few more days and then applying for CLI after Amex has issued a new soft pull.
When does an account become eligible for CLI?
===========================================================
Amex does NOT increase an account's credit limit during the first 60 days after it has been OPENED, or for at least 6 months from the last credit limit increase. The key word here is OPENED, aka the account creation date. There are a few ways to find out the exact date the account was opened.
a) Call the number on the back of the card and simply ask the friendly customer service rep. There is nothing to be worried about here or get nervous over. You are not asking the rep for a CLI, but simply for the day the account was opened. This can be done during card activation itself or at any point later. Some friendly CSRs will also calculate and mention the first eligible date for CLI (the 61st day since account opening). If they are not aware of the first eligible date, simply ask for the account opening/creation date and calculate it yourself.
b) The "Date Opened" field on your Equifax or TransUnion credit report. Both EQ and TU report the account opened day and month accurately (Equifax started doing this recently). 3rd party credit pullers may or may not pull the date opened field accurately; the official credit reports are the only reliable source. It is important to note that the year field in "Date Opened" is affected by your American Express Member Since date, so it may not reflect the year the card was originally opened. The only downside is one has to wait for the card to report to EQ and TU first, but Amex is usually very prompt to report.
c) Before the first statement has been cut, the account opened date is reflected under the Recent Charges field next to "Since". In the example pic, the account opened date is Jan 08. The "Since" field changes every statement period as it starts tracking the spending for that statement, so it is a good idea to get the account open date using this method as early as possible. Existing Amex card members with an online profile are at an advantage in using this method, because any newly approved cards show in the profile before the card has been delivered physically, displaying several key account details like the account open date, first statement closing date, last 5 digits of the card etc. First time Amex card members should try to set up the online profile as early as possible after physically receiving the card(s) to take advantage of this method, as the first statement can sometimes close very soon after receiving the card(s).
    Example calculation of 1st eligibility date taking the above screenshot as an example (Account opened : 01/08/2013). The account will complete 60 days on March 9, 2013 based on the following calculations:
    Number of days by January 31 : 23
    Number of days by February 28 : 28
    Number of days by March 9 : 9
Total days open by March 9 : 60
In this example the first eligible date for CLI is March 10, provided there have been no other CLIs on any other accounts in the last 6 months, as the 60 days are completed by March 9th. A better and easier way to calculate Amex milestones is explained in Post 227. Using the time and date calculator, we see the same result that we got from the manual calculation above.
    How to request the CLI?
===========================================================
a) Call the number on the back of the card. The number is unique to each credit product and is automated. Usually the option to request a credit limit increase is option #4 or #5, or if you are comfortable, a CSR can take the application and let you know the approval/denial once they submit it. If you are denied, they may not be able to give the specific reason and you will have to wait for an email or a letter in the mail outlining the reasons.
b) Log into your account. Click on "Profile and Preferences -> More Options -> Manage Credit Limit" under Account Management. It is also available by clicking "Manage My Account" (toggled to "HIDE ACCOUNT OPTIONS" in the pic below) next to the "Outstanding Balance/Available Credit" on the main page.
In both cases you want to request the new credit limit itself (e.g. $6,000 if the initial limit is $2,000). Don't calculate the increase ($6,000 - $2,000 = $4,000) yourself and mention that.
    What happens if you are denied CLI?
===========================================================
If you jumped the gun before the 60 days/6 months rule, the reason for denial is considered ineligibility. If that is the case you can reapply after the original and correct 60-day/6-month timeline. You do not have to wait an additional 90 days, so the clock doesn't get reset. If you were eligible and got denied due to other factors, then you are ineligible to reapply for another 90 days from the denial date. If, after requesting CLI, you get the message that a decision will be mailed within 7-10 days, it most likely means that the CLI has been denied. There is an outside chance that the letter that comes from Amex may ask for a signed 4506T in order to give further consideration to the CLI request (details below), but in 99% of the cases it is a letter outlining the reasons for denial. It also provides the true FICO score from the bureau that was used in the decision making process. Most of the time it is Experian FICO, but Amex is known to use EQ and/or TU in rare circumstances. Sometimes the reason(s) for denial is also sent via email immediately after the CLI request, in addition to the letter in the mail. Update 06/23/2013: Lately there have been reports of approvals in emails within a 24 hr period after initially getting the 7-10 day message. The 7-10 day message indicates manual review, and there has been a report of a successful partial CLI after talking to a CSR post 7-10 day message as well, in Post 182.

Can you request 3X CLI again after 6 months since last CLI?
===========================================================
Absolutely. There have been lots of success stories where people have gone from $2,000 to $6,000 to $18,000 because their income and credit profile supported the increase request. It is generally tougher to get the full 3X CLI after 6 months on the same account since the first CLI, however. Even if a 3X CLI is requested, Amex may counter offer a partial increase, which you can choose to accept or refuse. It is a good idea not to short change yourself by asking for a partial increase from the beginning, because the Amex system is smart enough to offer and approve a partial CLI if eligible. Amex may request documentation to support a CLI to a CL above $25K, including a request for a 4506T. Accounts are not suspended; nothing is mandatory. They may ask for documents, and if you don't present them, they may deny the CLI. More details below.

Additional Documentation/4506T for certain CLI Requests
===========================================================
For certain CLI requests, Amex sends a letter requesting a signed 4506T from the card holder. Amex can then obtain tax return transcripts from the IRS in order to verify income. Here is a sample tax return transcript which shows what kind of information can be obtained via a signed 4506T. A tax return transcript has far more details than just income and hence is considered very intrusive by many. If Amex does not receive the signed 4506T within 30 days from the date on the letter, then no further consideration is given to the CLI request and a letter of denial is sent. No CLI can be requested for 90 days from the date in the denial letter. Sending back the signed 4506T also doesn't guarantee a CLI approval, as Amex will also evaluate the usual credit factors via soft pulls on Experian in addition to income verification. Income relative to the overall credit available across all accounts (Amex and non-Amex) is one of the key factors for the Amex internal risk score. A lower risk score improves chances of getting approved for the CLI. The following often (but not always) are potential reasons why Amex asks for a 4506T for a CLI request:
- New requested credit limit exceeds $25,000 for that particular account.
- New requested credit limit would make the overall credit extended across all Amex revolving accounts exceed $34,500 (post 123).
- High internal risk score due to the ratio of income on file to overall credit across Amex and non-Amex accounts being too low.
- Unusual/large/manufactured/risky spending patterns.
It is important to note that this whole process, although inconvenient/complicated/intrusive, is different from the dreaded Amex Financial Review. During an Amex FR, all accounts are suspended, and sending back the signed 4506T is mandatory in order to be considered for lifting the suspension. For business accounts, even more documentation is required during an FR. Also, an FR can happen at any time, and not necessarily after requesting an increase in credit limit.

Some generic Amex CLI Guidelines
===========================================================
- Requesting CLI > 3X the current limit results in automatic denial. The letter in the mail indicates that the credit limit cannot be greater than three times the current limit. So American Amex credit limits are capped at a 200% increase. Update: Requesting CLI > 3X no longer results in automatic denial. If the account is eligible for CLI, a new limit is counter offered keeping the 200% cap in effect (meaning the CLI counter offered will not exceed 3X the current limit under any scenario).
- Canadian Amex are capped at a 50% increase every 6 months (Post 45).
- The minimum 6 months since last CLI rule applies across ALL accounts held by an individual and includes both personal & business accounts for the same person. So if a CLI was successful on a personal account, no CLI can be approved on any other personal or business account for the same person for a period of 6 more months. (Posts 5 and 6 of the AmEx 61 day CLI result: 7-10 day written response thread)
- Credit limits can be moved between accounts only after the completion of 13 statement cycles on the donor account. Post 170 explains how to move limits between Amex cards. Additionally, limits cannot be moved between personal and business cards, as mentioned in Post 171.
- Moving credit limits doesn't reset the 6 month clock and is a backdoor to receiving CLI on the same account twice within 6 months (Post 128).
- The "6 months" timeline used in the context of this guide is actually 180 days to be more precise (not 6 calendar months).

Carrying Balance vs PIF & Other Utilization Factors
===========================================================
This section is a work in progress. To see how charge cards affect credit scores, please check Post 204. Amex clearly dislikes carrying a balance. Check Post 189 for more details and the official reason for denial as "one or more payments received in the past 12 months on your american express account were too low given the outstanding balance and repayment history".

Change Log
===========================================================
06/23/2013: All changes since last update marked in RED. Added method to check Amex softs in official EX report. Added time and date calculator. Added verbiage to 7-10 message section to reflect recent experiences. Modified CLI > 3X point in generic CLI guidelines. Modified verbiage for moving credit limits to reflect 13 statement cycles for the donor account only. Added links on how to move credit limits and the restriction clause on moving limits between personal and business cards. Added a new work-in-progress Carrying Balance vs PIF section.
03/30/2013: "Letter in 7-10 business days" interpretation added in "What happens if you are denied CLI?" section, thanks to Revelate.
03/29/2013: Modified verbiage on method (c) to find account eligibility date, corrected some typos, added new 4506T section, updated generic CLI guidelines.
01/26/2013: a) Added section "Some Generic CLI Guidelines" b) Added pic for another way to access CLI online c) Modified verbiage in "What happens if denied CLI".
01/14/2013: Minor edits in formatting and ready for sticky.
01/13/2013: Initial document. Changes incorporated via posts from bradpitt (GEMB), OptimasPrime (cropped pic) and Walt_K (4506T documentation).

  • Maximum number of selections in an info package

    Hi friends,
I want to load an ODS with data from MSEG. Due to the great number of records, I have to select by 0MATERIAL. Selection criteria are provided by a routine I'll write for this selection, reading a different ODS. The estimated number of records for selection is about 80,000.
    My question:
Is there any restriction regarding the number of selection criteria of an InfoObject in InfoPackages?
    Will a selection work with 80,000 criteria?
    Any input is highly appreciated.
    Thanks in advance and regards
    Joe

Hello,
If I understood correctly... you will compare the values from a DSO and then pass these values in the InfoPackage selections.
But how are you planning to do it... will it be an interval or single values?
Also, I think you can assign only one value or range at a time in the InfoPackage for selection through a routine.
The more selections there are, the more ANDs in the where clause.
I am not sure if there is a hard limit on the where clause, but after about 100 selections the select queries become complex and can overflow memory... so that's the practical limitation.
Thanks
Ajeet

  • Selecting Single Rows that match to a column within group function output

Trying to write a query that will look through a data set and return the Barcodes of CompoundNames that have a summed Quantity > 500.
So if it were run against the sample table below, the output would be:
    0005
    0006
    0007
    0008
    0009
    0010
    Barcode, CompoundName, BatchId, Quantity
    0001, XE 1000, XE 1000 100, 100
    0002, XE 1000, XE 1000 101, 100
    0003, XE 1000, XE 1000 102, 100
    0004, XE 1000, XE 1000 103, 100
    0005, XE 2000, XE 2000 100, 100
    0006, XE 2000, XE 2000 101, 100
    0007, XE 2000, XE 2000 102, 100
    0008, XE 2000, XE 2000 103, 100
    0009, XE 2000, XE 2000 104, 100
    0010, XE 2000, XE 2000 105, 100
    0011, XE 3000, XE 3000 100, 100
    I've got this far
Select CompoundName, SUM(QUANTITY) FROM Table
GROUP BY CompoundName
HAVING SUM(QUANTITY) > 500
order by compoundname;
    But I need each Barcode that corresponds to each batchid when the summed quantity of the batches is > 500.
    TIA

Replacing a GROUP BY aggregate function with its analytic equivalent (using PARTITION BY)
will return every row (limited by the where clause) but will not perform the
actual aggregation operation.
So it is possible that the selected result set could contain duplicate rows. Of course it depends on the columns being selected and the input data.
(Of course the OP's sample data returns the same result with or without DISTINCT.)
For example...
    *WITH DISTINCT*
    {code}
    sudhakar@ORCL>with t1 as
    2 (select 0001 barcode,'XE0000' COMPOUNDNAME, 700 quantity FROM DUAL UNION ALL
    3 select 0003 ,'XE1000' , 20 FROM DUAL UNION ALL
    4 select 0003 ,'XE1000' , 280 FROM DUAL UNION ALL
    5 select 0003 ,'XE2000' , 50 FROM DUAL UNION ALL
    6 select 0003 ,'XE2000' , 100 FROM DUAL UNION ALL
    7 select 0003 ,'XE2000' , 150 FROM DUAL UNION ALL
    8 select 0003 ,'XE2000' , 200 FROM DUAL UNION ALL
    9 select 0003 ,'XE2000' , 750 FROM DUAL UNION ALL
    10 select 0003 ,'XE2000' , 120 FROM DUAL UNION ALL
    11 select 0003 ,'XE1000' , 70 FROM DUAL
    12 )
    13 select distinct * from
    14 (
    15 Select Barcode, CompoundName, SUM(QUANTITY) over (partition by CompoundName) sumqty
    16 FROM t1
    17 )
    18 where sumqty > 500
    19 order by compoundname;
    BARCODE COMPOU SUMQTY
    1 XE0000 700
    3 XE2000 1370
    sudhakar@ORCL>
    {code}
    *WITHOUT DISTINCT*
    {code}
    sudhakar@ORCL>with t1 as
    2 (select 0001 barcode,'XE0000' COMPOUNDNAME, 700 quantity FROM DUAL UNION ALL
    3 select 0003 ,'XE1000' , 20 FROM DUAL UNION ALL
    4 select 0003 ,'XE1000' , 280 FROM DUAL UNION ALL
    5 select 0003 ,'XE2000' , 50 FROM DUAL UNION ALL
    6 select 0003 ,'XE2000' , 100 FROM DUAL UNION ALL
    7 select 0003 ,'XE2000' , 150 FROM DUAL UNION ALL
    8 select 0003 ,'XE2000' , 200 FROM DUAL UNION ALL
    9 select 0003 ,'XE2000' , 750 FROM DUAL UNION ALL
    10 select 0003 ,'XE2000' , 120 FROM DUAL UNION ALL
    11 select 0003 ,'XE1000' , 70 FROM DUAL
    12 )
    13 select * from
    14 (
    15 Select Barcode, CompoundName, SUM(QUANTITY) over (partition by CompoundName) sumqty
    16 FROM t1
    17 )
    18 where sumqty > 500
    19 order by compoundname;
    BARCODE COMPOU SUMQTY
    1 XE0000 700
    3 XE2000 1370
    3 XE2000 1370
    3 XE2000 1370
    3 XE2000 1370
    3 XE2000 1370
    3 XE2000 1370
    7 rows selected.
    sudhakar@ORCL>
    {code}
    vr,
    Sudhakar B.
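Building on the replies above, the plain GROUP BY route also answers the original question directly, without needing DISTINCT: feed the qualifying compound names back through an IN clause. A sketch using the posted names ("Table" stands in for the real table name, as in the original post):
{code}
SELECT Barcode
FROM   Table
WHERE  CompoundName IN (SELECT CompoundName
                        FROM   Table
                        GROUP  BY CompoundName
                        HAVING SUM(Quantity) > 500)
ORDER  BY Barcode;
{code}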

  • ORA-01467

    Hi All,
I am getting the following error from my pivoting query.
    Declare
    ERROR at line 1:
    ORA-01467: sort key too long
    ORA-06512: at line 30
On some forum I read that this error is related to the data block size, and
also that DECODE and CASE statements have a limitation of 255 clauses in a
query.
My query is as follows:
    insert into gtt_sample_component_matrix1
    (     submission_id
    ,     sample_id
    ,     C1
    ,     C1_RESULT_ID
    ,     C1_NUMBER_VALUE
    ,     C2
    ,     C2_RESULT_ID
    ,     C2_NUMBER_VALUE
    ,     C66
    ,     C66_RESULT_ID
    ,     C66_NUMBER_VALUE
)
select     submission_id
,     sample_id
,     max( decode( ref_component , 'C1_1149_1 component' , text_value )) C1
,     max( decode( ref_component , 'C1_1149_1 component' , C1_RESULT_ID )) C1_RESULT_ID
,     max( decode( ref_component , 'C1_1149_1 component' , C1_NUMBER_VALUE )) C1_NUMBER_VALUE
,     max( decode( ref_component , 'C1_1149_2 component' , text_value )) C2
,     max( decode( ref_component , 'C1_1149_2 component' , C2_RESULT_ID )) C2_RESULT_ID
,     max( decode( ref_component , 'C1_1149_2 component' , C2_NUMBER_VALUE )) C2_NUMBER_VALUE
,     max( decode( ref_component , 'C4_OQ Test Operation' , text_value )) C66
,     max( decode( ref_component , 'C4_OQ Test Operation' , C66_RESULT_ID )) C66_RESULT_ID
,     max( decode( ref_component , 'C4_OQ Test Operation' , C66_NUMBER_VALUE )) C66_NUMBER_VALUE
from gtt_submission_rs group by submission_id, sample_id
Or could anyone suggest alternative logic for pivoting?

Oracle db version + release?
How many columns are included in your select statement?
    Nicolas.
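If the database turns out to be 11g or later, a hedged alternative sketch for the pivoting itself: the built-in PIVOT clause expresses the same reshaping without the long DECODE list. The base column names result_id and number_value are assumptions about gtt_submission_rs (adjust them to the real ones), and the elided C3..C65 entries follow the same pattern:
{code}
-- 11g+ only; each aggregate/alias pair replaces one MAX(DECODE(...)) column
SELECT *
FROM  (SELECT submission_id, sample_id, ref_component,
              text_value, result_id, number_value   -- assumed base columns
       FROM   gtt_submission_rs)
PIVOT (MAX(text_value)   AS txt,
       MAX(result_id)    AS result_id,
       MAX(number_value) AS number_value
       FOR ref_component IN ('C1_1149_1 component'  AS c1,
                             'C1_1149_2 component'  AS c2,
                             'C4_OQ Test Operation' AS c66));
{code}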

  • Technical help with dead new macbook

Ok so my 9-month-old MacBook gave up the ghost for no apparent reason a week and a half ago. I have spoken to MacSupport and they have diagnosed a dead hard drive, so they suggest sending it in for repair. I've had a look on this forum and it seems that they will replace the hard drive but not recover the data from it for me. Is this correct? Where can I find a statement to that effect?
    Also, I have heard that as a user, I can replace the hard drive myself without voiding my warranty? Again, where can I find the literature to support this?
    This is our very first ever Mac and we want to do the right thing...and we just want our computer to be working again!
    Thanks for your help.
    Cas

    Apple is not in the data recovery business, because of the potential liability. In your Apple License (which you still might have), you can find it in the Limitation of Liability clause, which states that Apple will replace, not repair, a drive.
    http://docs.info.apple.com/article.html?artnum=31077
    It is safe and okay to swap drives without any harm to your machine, the same way you swap RAM. For a clear procedure and video:
    http://creativemac.digitalmedianet.com/articles/viewarticle.jsp?id=45088
    http://youtube.com/watch?v=8c6ckjy-gdY

  • Filter factory limitation on clauses?

I have encountered what may be a limitation on the number of clauses in Filter Factory coding and would like someone to confirm or debunk it.
I have written 12 lines of code, all similar, all containing three && "and" operators, and all ending with the || "or" operator except the last line, which ends by concluding an overall "If-Then" statement. The caution icon comes on when the ninth line is added, and one of the && signs gets highlighted as the error source. Yet that line is basically no different in structure from any of the others. Only when I backspace out the ninth and subsequent lines does the caution disappear and the filter respond.

    Here's the code in the R box:
x<128 && y<128 && r>g && g>b && (128*b)>(r*x) ||
x<128 && y<128 && r>b && b>g && (128*g)>(r*x) ||
x<128 && y<128 && r>b && b>g && (128*g)>(r*y) ||
x<128 && y<128 && g>r && r>b && (128*b)>(g*x) ||
x<128 && y<128 && g>r && r>b && (128*b)>(g*y) ||
x<128 && y<128 && g>b && b>r && (128*r)>(g*x) ||
x<128 && y<128 && g>b && b>r && (128*r)>(g*y) ||
x<128 && y<128 && b>r && r>g && (128*g)>(b*x) ||
x<128 && y<128 && b>r && r>g && (128*g)>(b*y) ||
x<128 && y<128 && b>g && g>r && (128*r)>(b*x) ||
x<128 && y<128 && b>g && g>r && (128*r)>(b*y) ? 255:r
It works until I add the last three lines, keeping, of course, the If-Then-Else conclusion "?255:r".
If I add any one, two, or all of the last three lines, the caution comes on.
    The G and B boxes are identical except they end with g and b, respectively. They behave the same way as the R box, as you would expect.

  • Oracle limitation? function created using "or replace" clause

How can I know if a function was created using the "or replace" clause?
I mean, we can use "create function xxx" or "create or replace function xxx" to create a function.
When we check all_source (select text from all_source) to get the function body, the "or replace" clause is missing. Both appear as "Function xxx".
We are asked to be able to identify whether the function was created using "create" or "create or replace".
What should we do? Is this an Oracle limitation?

That only works if nothing other than a create or create/replace happens to the function over time. Consider:
    SQL> CREATE TABLE t AS
      2  SELECT rownum id, TO_CHAR(TO_DATE(rownum, 'J'), 'Jsp') descr
      3  FROM all_objects
      4  WHERE rownum < 10;
    Table created.
    SQL> CREATE FUNCTION f (p_num IN NUMBER) RETURN VARCHAR2 AS
      2     l_v VARCHAR2(30);
      3  BEGIN
      4     SELECT descr INTO l_v
      5     FROM t
      6     WHERE id = p_num;
      7     RETURN l_v;
      8  END;
      9  /
    Function created.
    SQL> SELECT object_name, created, last_ddl_time, timestamp
      2  FROM user_objects
      3  WHERE object_name = 'F';
    OBJECT_NAM CREATED              LAST_DDL_TIME        TIMESTAMP
F          09-aug-2010 09:45:39 09-aug-2010 09:45:39 2010-08-09:09:45:39
As a brand new function, all dates agree, as expected. But now, I want someone to actually be able to use this function, so:
    SQL> GRANT EXECUTE ON f TO john;
    Grant succeeded.
    SQL> SELECT object_name, created, last_ddl_time, timestamp
      2  FROM user_objects
      3  WHERE object_name = 'F';
    OBJECT_NAM CREATED              LAST_DDL_TIME        TIMESTAMP
F          09-aug-2010 09:45:39 09-aug-2010 09:47:00 2010-08-09:09:45:39
Note that a grant is DDL and it was performed on this function, so last_ddl_time is different, but I have not done a create or replace. The timestamp is still the same, so maybe that is a possibility? But ...
    SQL> ALTER TABLE t ADD (descr2 VARCHAR2(30));
    Table altered.
    SQL> SELECT status
      2  FROM user_objects
      3  WHERE object_name = 'F';
    STATUS
    INVALID
    SQL> SELECT f(1) FROM dual; -- auto recompile
    F(1)
    One
    SQL> SELECT object_name, created, last_ddl_time, timestamp
      2  FROM user_objects
      3  WHERE object_name = 'F';
    OBJECT_NAM CREATED              LAST_DDL_TIME        TIMESTAMP
F          09-aug-2010 09:45:39 09-aug-2010 09:49:03 2010-08-09:09:49:03
So, I have still not explicitly done a create or replace, but the timestamp is now different. Does this auto recompile count as a create or replace?
    John
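Since the dictionary can't distinguish the two after the fact, one option going forward is to capture the DDL yourself and inspect it for the OR REPLACE keyword. A minimal sketch, assuming you can create a schema-level DDL trigger (the table and trigger names are illustrative):
{code}
CREATE TABLE ddl_audit (
  object_name VARCHAR2(128),
  ddl_text    CLOB,
  logged_at   DATE
);

CREATE OR REPLACE TRIGGER trg_log_creates
AFTER CREATE ON SCHEMA
DECLARE
  l_lines ORA_NAME_LIST_T;
  l_count PLS_INTEGER;
  l_stmt  CLOB;
BEGIN
  -- ora_sql_txt returns the triggering statement in 64-byte chunks
  l_count := ORA_SQL_TXT(l_lines);
  FOR i IN 1 .. l_count LOOP
    l_stmt := l_stmt || l_lines(i);
  END LOOP;
  INSERT INTO ddl_audit VALUES (ORA_DICT_OBJ_NAME, l_stmt, SYSDATE);
END;
/

-- later: a crude keyword check over the logged statements
SELECT object_name, logged_at
FROM   ddl_audit
WHERE  object_name = 'F'
AND    UPPER(ddl_text) LIKE 'CREATE%OR%REPLACE%';
{code}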

  • Limitation that only 1000 records can be included in the "IN" clause

In RapidSQL we have a limitation that only 1000 values can be included in the "IN" clause. Is there a way to give more than 1000 values, so that it will reduce the execution time?

    Why do you need to list more than 1000 individual items in the first place? That's generally not a good way to go about building a query.
    You can always include a query that returns as many rows as you'd like in an IN clause, i.e.
SELECT *
  FROM some_table
 WHERE some_column IN (SELECT some_column FROM some_other_table)
So if you throw the thousands of values you want in the IN list into a table, you could then query the table in your IN clause.
    From a performance standpoint, of course, you may also want to look at the EXISTS clause depending on the relative data volumes involved.
    Justin
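For completeness, a hedged sketch of another workaround sometimes seen when the values really must stay inline: the 1000-expression limit (ORA-01795) applies to lists of single expressions, and pairing each value with a dummy constant sidesteps it, though the table-based approach above is still the better design:
{code}
SELECT *
  FROM some_table
 WHERE (1, some_column) IN ((1, 'value1'),
                            (1, 'value2'),
                            (1, 'value3'));  -- pattern continues past 1000 pairs
{code}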

  • 2GB OR NOT 2GB - FILE LIMITS IN ORACLE

Product : ORACLE SERVER
Date written : 2002-04-11
    2GB OR NOT 2GB - FILE LIMITS IN ORACLE
    ======================================
    Introduction
    ~~~~~~~~~~~~
    This article describes "2Gb" issues. It gives information on why 2Gb
    is a magical number and outlines the issues you need to know about if
    you are considering using Oracle with files larger than 2Gb in size.
It also looks at some other file related limits and issues.
    The article has a Unix bias as this is where most of the 2Gb issues
    arise but there is information relevant to other (non-unix)
    platforms.
    Articles giving port specific limits are listed in the last section.
    Topics covered include:
    Why is 2Gb a Special Number ?
    Why use 2Gb+ Datafiles ?
    Export and 2Gb
    SQL*Loader and 2Gb
    Oracle and other 2Gb issues
    Port Specific Information on "Large Files"
    Why is 2Gb a Special Number ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Many CPUs and system call interfaces (APIs) in use today use a word
    size of 32 bits. This word size imposes limits on many operations.
    In many cases the standard API's for file operations use a 32-bit signed
    word to represent both file size and current position within a file (byte
    displacement). A 'signed' 32bit word uses the top most bit as a sign
    indicator leaving only 31 bits to represent the actual value (positive or
negative). In hexadecimal the largest positive number that can be
represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal.
    This is ONE less than 2Gb.
    Files of 2Gb or more are generally known as 'large files'. As one might
    expect problems can start to surface once you try to use the number
    2147483648 or higher in a 32bit environment. To overcome this problem
    recent versions of operating systems have defined new system calls which
    typically use 64-bit addressing for file sizes and offsets. Recent Oracle
    releases make use of these new interfaces but there are a number of issues
    one should be aware of before deciding to use 'large files'.
    What does this mean when using Oracle ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The 32bit issue affects Oracle in a number of ways. In order to use large
    files you need to have:
    1. An operating system that supports 2Gb+ files or raw devices
    2. An operating system which has an API to support I/O on 2Gb+ files
    3. A version of Oracle which uses this API
    Today most platforms support large files and have 64bit APIs for such
    files.
    Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
    but the situation is very dependent on platform, operating system version
    and the Oracle version. In some cases 'large file' support is present by
    default, while in other cases a special patch may be required.
    At the time of writing there are some tools within Oracle which have not
    been updated to use the new API's, most notably tools like EXPORT and
    SQL*LOADER, but again the exact situation is platform and version specific.
    Why use 2Gb+ Datafiles ?
    ~~~~~~~~~~~~~~~~~~~~~~~~
    In this section we will try to summarise the advantages and disadvantages
    of using "large" files / devices for Oracle datafiles:
    Advantages of files larger than 2Gb:
    On most platforms Oracle7 supports up to 1022 datafiles.
    With files < 2Gb this limits the database size to less than 2044Gb.
    This is not an issue with Oracle8 which supports many more files.
    In reality the maximum database size would be less than 2044Gb due
    to maintaining separate data in separate tablespaces. Some of these
    may be much less than 2Gb in size.
Fewer files to manage for smaller databases.
Fewer file handle resources required.
    Disadvantages of files larger than 2Gb:
    The unit of recovery is larger. A 2Gb file may take between 15 minutes
    and 1 hour to backup / restore depending on the backup media and
    disk speeds. An 8Gb file may take 4 times as long.
    Parallelism of backup / recovery operations may be impacted.
    There may be platform specific limitations - Eg: Asynchronous IO
    operations may be serialised above the 2Gb mark.
As handling of files above 2Gb may need patches, special configuration
etc., there is an increased risk involved as opposed to smaller files.
    Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
    Important points if using files >= 2Gb
    Check with the OS Vendor to determine if large files are supported
    and how to configure for them.
    Check with the OS Vendor what the maximum file size actually is.
    Check with Oracle support if any patches or limitations apply
    on your platform , OS version and Oracle version.
    Remember to check again if you are considering upgrading either
    Oracle or the OS in case any patches are required in the release
    you are moving to.
    Make sure any operating system limits are set correctly to allow
    access to large files for all users.
    Make sure any backup scripts can also cope with large files.
    Note that there is still a limit to the maximum file size you
    can use for datafiles above 2Gb in size. The exact limit depends
    on the DB_BLOCK_SIZE of the database and the platform. On most
    platforms (Unix, NT, VMS) the limit on file size is around
    4194302*DB_BLOCK_SIZE.
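For example, with a DB_BLOCK_SIZE of 8192 that works out to
4194302 * 8192 bytes, i.e. just under 32Gb for a single datafile.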
    Important notes generally
    Be careful when allowing files to automatically resize. It is
    sensible to always limit the MAXSIZE for AUTOEXTEND files to less
    than 2Gb if not using 'large files', and to a sensible limit
    otherwise. Note that due to <Bug:568232> it is possible to specify
a value of MAXSIZE larger than Oracle can cope with, which may
    result in internal errors after the resize occurs. (Errors
    typically include ORA-600 [3292])
    On many platforms Oracle datafiles have an additional header
    block at the start of the file so creating a file of 2Gb actually
    requires slightly more than 2Gb of disk space. On Unix platforms
    the additional header for datafiles is usually DB_BLOCK_SIZE bytes
    but may be larger when creating datafiles on raw devices.
    2Gb related Oracle Errors:
    These are a few of the errors which may occur when a 2Gb limit
    is present. They are not in any particular order.
    ORA-01119 Error in creating datafile xxxx
    ORA-27044 unable to write header block of file
    SVR4 Error: 22: Invalid argument
    ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
    ORA-27070 skgfdisp: async read/write failed
    ORA-02237 invalid file size
    KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
    file limit exceed.
    Unix error 27, EFBIG
    Export and 2Gb
    ~~~~~~~~~~~~~~
    2Gb Export File Size
    ~~~~~~~~~~~~~~~~~~~~
    At the time of writing most versions of export use the default file
    open API when creating an export file. This means that on many platforms
    it is impossible to export a file of 2Gb or larger to a file system file.
    There are several options available to overcome 2Gb file limits with
    export such as:
    - It is generally possible to write an export > 2Gb to a raw device.
    Obviously the raw device has to be large enough to fit the entire
    export into it.
    - By exporting to a named pipe (on Unix) one can compress, zip or
    split up the output.
    See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
    - One can export to tape (on most platforms)
    See "Exporting to tape on Unix systems" <Note:30428.1>
    (This article also describes in detail how to export to
    a unix pipe, remote shell etc..)
    Other 2Gb Export Issues
    ~~~~~~~~~~~~~~~~~~~~~~~
    Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
    with EXPORT on many releases of Oracle such that if you export a large table
    and specify COMPRESS=Y then it is possible for the NEXT storage clause
    of the statement in the EXPORT file to contain a size above 2Gb. This
    will cause import to fail even if IGNORE=Y is specified at import time.
    This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
    An export will typically report errors like this when it hits a 2Gb
    limit:
    . . exporting table BIGEXPORT
    EXP-00015: error on row 10660 of table BIGEXPORT,
    column MYCOL, datatype 96
    EXP-00002: error in writing to export file
    EXP-00002: error in writing to export file
    EXP-00000: Export terminated unsuccessfully
    There is a secondary issue reported in <Bug:185855> which indicates that
    a full database export generates a CREATE TABLESPACE command with the
    file size specified in BYTES. If the filesize is above 2Gb this may
    cause an ORA-2237 error when attempting to create the file on IMPORT.
This issue can be worked around by creating the tablespace prior to
    importing by specifying the file size in 'M' instead of in bytes.
    <Bug:490837> indicates a similar problem.
    Export to Tape
    ~~~~~~~~~~~~~~
The VOLSIZE parameter for export is limited to values less than 4Gb.
On some platforms it may be only 2Gb.
    This is corrected in Oracle 8i. <Bug:490190> describes this problem.
    SQL*Loader and 2Gb
    ~~~~~~~~~~~~~~~~~~
    Typically SQL*Loader will error when it attempts to open an input
    file larger than 2Gb with an error of the form:
    SQL*Loader-500: Unable to open file (bigfile.dat)
    SVR4 Error: 79: Value too large for defined data type
The examples in <Note:30528.1> can be modified for use with SQL*Loader
    for large input data files.
    Oracle 8.0.6 provides large file support for discard and log files in
    SQL*Loader but the maximum input data file size still varies between
    platforms. See <Bug:948460> for details of the input file limit.
    <Bug:749600> covers the maximum discard file size.
    Oracle and other 2Gb issues
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section lists miscellaneous 2Gb issues:
    - From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
    An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
    - DBV (the database verification file program) may not be able to scan
    datafiles larger than 2Gb reporting "DBV-100".
    This is reported in <Bug:710888>
    - "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
    specified in 'M' or 'K' to create files larger than 2Gb otherwise the
    error "ORA-02237: invalid file size" is reported. This is documented
    in <Bug:185855>.
    - Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
    Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
    reports
    ORA-2187: invalid quota specification.
    This is documented in <Bug:425831>.
    The workaround is to grant users UNLIMITED TABLESPACE privilege if they
    need a quota above 2Gb.
    - Tools which spool output may error if the spool file reaches 2Gb in size.
    Eg: sqlplus spool output.
    - Certain 'core' functions in Oracle tools do not support large files -
    See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
    Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
    Even with this fix there may still be large file restrictions as not
    all code uses these 'core' functions.
    Note though that <Bug:749600> covers CORE functions - some areas of code
    may still have problems.
    Eg: CORE is not used for SQL*Loader input file I/O
    - The UTL_FILE package uses the 'core' functions mentioned above and so is
limited by 2Gb restrictions in Oracle releases which do not contain this fix.
    <Package:UTL_FILE> is a PL/SQL package which allows file IO from within
    PL/SQL.
    Port Specific Information on "Large Files"
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Below are references to information on large file support for specific
    platforms. Although every effort is made to keep the information in
    these articles up-to-date it is still advisable to carefully test any
    operation which reads or writes from / to large files:
    Platform See
    ~~~~~~~~ ~~~
    AIX (RS6000 / SP) <Note:60888.1>
    HP <Note:62407.1>
    Digital Unix <Note:62426.1>
    Sequent PTX <Note:62415.1>
    Sun Solaris <Note:62409.1>
    Windows NT Maximum 4Gb files on FAT
    Theoretical 16Tb on NTFS
    ** See <Note:67421.1> before using large files
    on NT with Oracle8
    *2 There is a problem with DBVERIFY on 8.1.6
    See <Bug:1372172>

    I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
    Step 1
    Write a simple Java program like the one listed:
import java.io.File;
public class fileCheckUtl {
    public static int fileExists(String FileName) {
        File x = new File(FileName);
        if (x.exists())
            return 1;
        else
            return 0;
    }
    public static void main(String args[]) {
        fileCheckUtl f = new fileCheckUtl();
        int i;
        i = f.fileExists(args[0]);
        System.out.println(i);
    }
}
Step 2 - Load this into the Oracle database using loadjava:
    loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
    The output should be something like this:
    creating : source fileCheckUtl
    loading : source fileCheckUtl
    creating : fileCheckUtl
    resolving: source fileCheckUtl
    Step 3 - Create a PL/SQL wrapper for the Java Class:
    CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
    LANGUAGE JAVA
    NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
Step 4 - Test it:
    SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
    2 /
    FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
    1

  • Using bind variable with IN clause

    My application runs a limited number of straight up queries (no stored procs) using ODP.NET. For the most part, I'm able to use bind variables to help with query caching, etc... but I'm at a loss as to how to use bind variables with IN clauses. Basically, I'm looking for something like this:
    int objectId = 123;
    string[] listOfValues = { "a", "b", "c"};
    OracleCommand command = new OracleCommand();
    command.Connection = conn;
    command.BindByName = true;
    command.CommandText = @"select blah from mytable where objectId = :objectId and somevalue in (:listOfValues)";
    command.Parameters.Add("objectId", objectId);
    command.Parameters.Add("listOfValues", listOfValues);
    I haven't had much luck yet using an array as a bind variable. Do I need to pass it in as a PL/SQL associative array? Cast the values to a TABLE?
    Thanks,
    Nick

    Nevermind, found this
    How to use OracleParameter whith the IN Operator of select statement
    which contained this, which is a brilliant solution
    http://oradim.blogspot.com/2007/12/dynamically-creating-variable-in-list.html
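For anyone landing here later, the gist of the linked technique sketched in SQL (the collection type name vc_list is illustrative; the article builds something similar): bind a single collection parameter and unnest it with TABLE(), so the statement text stays constant however many values are passed:
{code}
-- one-time setup: a schema-level collection type
CREATE OR REPLACE TYPE vc_list AS TABLE OF VARCHAR2(4000);
/

-- the query then binds one collection instead of N literals
SELECT blah
FROM   mytable
WHERE  objectId = :objectId
AND    somevalue IN (SELECT column_value
                     FROM   TABLE(CAST(:listOfValues AS vc_list)));
{code}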

  • To_Date function in the Where Clause

    Hello All,
    I'm having an issue using the to_date function that has me quite perplexed.
    I have two varchar2 fields, one with a date value in the format Mon, DD YYYY, the other has a time value in the format HH:MI PM.
When I run my query, one of the columns I retrieve looks like this: TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM'). The two fields are concatenated together and converted to a date. This works fine.
My problem occurs when I attempt to apply the same logic to the where clause of the aforementioned query, e.g. when I add the criterion and TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM') <= sysdate to my query, I get an ORA-01843: not a valid month error.
    To further illustrate my problem here are the two queries:
    Select d4.adate, e4.atime, TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM')
    from ....
    where ....
    The above query works.
    Select d4.adate, e4.atime, TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM')
    from ....
    where ....
    and TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM') <= sysdate
    The second query does not work.
    The tables used and the limiting criteria are identical, except for the last one.
    Does anyone have any ideas why this could be happening.
    er

    Hello,
Check this out. It does work. Do cut and paste sample
data from your tables.
    SQL> desc test
    Name Null? Type
    ID NUMBER
    DDATE VARCHAR2(20)
    DTIME VARCHAR2(20)
    SQL> select * from test;
    ID DDATE DTIME
    1 Jan, 10 2006 12:32 PM
    2 Mar, 11 2005 07:10 AM
    3 Apr, 13 2006 03:12 AM
    4 Nov, 15 2003 11:22 PM
    5 Dec, 20 2005 09:12 AM
    6 Oct, 30 2006 10:00 AM
    7 Jan, 10 2006 12:32 PM
    8 Apr, 11 2005 07:10 AM
    9 May, 13 2006 03:12 AM
    10 Sep, 15 2003 11:22 PM
    11 Oct, 20 2005 09:12 AM
    12 Dec, 30 2006 10:00 AM
    12 rows selected.
SQL> select id, ddate, dtime,
2 to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM') AA,
3 to_char(to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM'),'Mon, DD YYYYHH:MI PM') BB
4 from test;
ID  DDATE         DTIME     AA         BB
 1  Jan, 10 2006  12:32 PM  10-JAN-06  Jan, 10 200612:32 PM
 2  Mar, 11 2005  07:10 AM  11-MAR-05  Mar, 11 200507:10 AM
 3  Apr, 13 2006  03:12 AM  13-APR-06  Apr, 13 200603:12 AM
 4  Nov, 15 2003  11:22 PM  15-NOV-03  Nov, 15 200311:22 PM
 5  Dec, 20 2005  09:12 AM  20-DEC-05  Dec, 20 200509:12 AM
 6  Oct, 30 2006  10:00 AM  30-OCT-06  Oct, 30 200610:00 AM
 7  Jan, 10 2006  12:32 PM  10-JAN-06  Jan, 10 200612:32 PM
 8  Apr, 11 2005  07:10 AM  11-APR-05  Apr, 11 200507:10 AM
 9  May, 13 2006  03:12 AM  13-MAY-06  May, 13 200603:12 AM
10  Sep, 15 2003  11:22 PM  15-SEP-03  Sep, 15 200311:22 PM
11  Oct, 20 2005  09:12 AM  20-OCT-05  Oct, 20 200509:12 AM
12  Dec, 30 2006  10:00 AM  30-DEC-06  Dec, 30 200610:00 AM
12 rows selected.
SQL> select id, ddate, dtime,
to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM')
2 from test
3 where id > 3
4 and to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM') <= trunc(sysdate);
ID  DDATE         DTIME     TO_DATE(D
 4  Nov, 15 2003  11:22 PM  15-NOV-03
 5  Dec, 20 2005  09:12 AM  20-DEC-05
 7  Jan, 10 2006  12:32 PM  10-JAN-06
 8  Apr, 11 2005  07:10 AM  11-APR-05
10  Sep, 15 2003  11:22 PM  15-SEP-03
11  Oct, 20 2005  09:12 AM  20-OCT-05
6 rows selected.
SQL> select id, ddate, dtime,
to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM')
2 from test
3 where id > 3
4 and to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM') <= sysdate;
ID  DDATE         DTIME     TO_DATE(D
 4  Nov, 15 2003  11:22 PM  15-NOV-03
 5  Dec, 20 2005  09:12 AM  20-DEC-05
 7  Jan, 10 2006  12:32 PM  10-JAN-06
 8  Apr, 11 2005  07:10 AM  11-APR-05
10  Sep, 15 2003  11:22 PM  15-SEP-03
11  Oct, 20 2005  09:12 AM  20-OCT-05
6 rows selected.
-Sri
Sorry Sri, but I fail to see what you mean. How is what you're doing any different than what I'm doing?
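A hedged note on the likely missing piece (not from the original thread): Sri's test data is all clean, whereas the OP's tables probably contain at least one row whose concatenated string doesn't match the mask. Oracle doesn't guarantee predicate evaluation order, so moving TO_DATE into the where clause can apply it to rows that the query's other conditions would otherwise have filtered out first, producing ORA-01843. A sketch for hunting down such rows against the test table above (the function name is illustrative):
{code}
CREATE OR REPLACE FUNCTION is_valid_dt (p_str IN VARCHAR2) RETURN NUMBER IS
  l_d DATE;
BEGIN
  l_d := TO_DATE(p_str, 'Mon DD, YYYYHH:MI PM');
  RETURN 1;
EXCEPTION
  WHEN OTHERS THEN RETURN 0;  -- conversion failed
END;
/

-- rows whose concatenated value would break TO_DATE
SELECT id, ddate, dtime
FROM   test
WHERE  is_valid_dt(ddate || dtime) = 0;
{code}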

  • Slow split table export (R3load and WHERE clause)

    For our split table exports, we used custom coded WHERE clauses. (Basically adding additional columns to the R3ta default column to take advantage of existing indexes).
    The results have been good so far. Full tablescans have been eliminated and export times have gone down, in some cases, tables export times have improved by 50%.
    However, our biggest table, CE1OC01 (120 GB), continues to be a bottleneck. Initially, after using the new WHERE clause, it looked like performance gains were dramatic, with export times for the first 5 packages dropping from 25-30 hours down to 1 1/2 hours.
However, after 2 hours, the remaining CE1OC01 split packages have shown no improvement. This is very odd, and we are trying to determine why part of the table exports very fast while other parts run very slowly.
    Before the custom WHERE clauses, the export server had run into issues with SORTHEAP being exhausted, so we thought that might be the culprit. But that does not seem to be an issue now, since the improved WHERE clauses have reduced or eliminated excessive sorting.
    I checked the access path of all the CE1OC01 packages, through EXPLAIN, and they all access the same index to return results. The execution time in EXPLAIN returns similar times for each of the packages:
    CE1OC01-11: select * from CE1OC01  WHERE MANDT='212'
    AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')
    CE1OC01-19: select * from CE1OC01 WHERE MANDT='212'
    AND ("BELNR" > '0181387534') AND ("BELNR" <= '0188469413')
          0 SELECT STATEMENT ( Estimated Costs =  8.448E+06 [timerons] )
      |
      ---      1 RETURN
          |
          ---      2 FETCH CE1OC01
              |
              ------   3 IXSCAN CE1OC01~4 #key columns:  2
    query execution time [millisec]            |       333
    uow elapsed time [microsec]                |   429,907
    total user CPU time [microsec]             |         0
    total system cpu time [microsec]           |         0
    Both queries utilize an index that has fields MANDT and BELNR. However, during R3load, CE1OC01-19 finishes in an hour and a half, whereas CE1OC01-11 can take 25-30 hours.
    I am wondering if there is anything else to check on the DB2 access path side of things or if I need to start digging deeper into other aggregate load/infrastructure issues. Other tables don't seem to exhibit this behavior. There is some discrepancy between other tables' run times (for example, 2-4 hours), but those are not as dramatic as this particular table.
Another idea to test is to export only 5 parts of the table at a time; perhaps there is a throughput or logical limitation when all 20 of the exports are running at the same time. Or create a single-column index on BELNR (the default R3ta column) and see if that shows any improvement.
Anyone have any ideas on why some of the table moves fast but the rest of it moves slowly?
    We also notice that the "fast" parts of the table are at the very end of the table. We are wondering if perhaps the index is less fragmented in that range, a REORG or recreation of the index may do this table some good. We were hoping to squeeze as many improvements out of our export process as possible before running a full REORG on the database. This particular index (there are 5 indexes on this table) has a Cluster Ratio of 54%, so, perhaps for purposes of the export, it may make sense to REORG the table and cluster it around this particular index. By contrast, the primary key index has a Cluster Ratio of 86%.
    Here is the output from our current run. The "slow" parts of the table have not completed, but they average a throughput of 0.18 MB/min, versus the "fast" parts, which average 5 MB/min, a pretty dramatic difference.
    package     time      start date        end date          size MB  MB/min
    CE1OC01-16  10:20:37  2008-11-25 20:47  2008-11-26 07:08   417.62    0.67
    CE1OC01-18   1:26:58  2008-11-25 20:47  2008-11-25 22:14   429.41    4.94
    CE1OC01-17   1:26:04  2008-11-25 20:47  2008-11-25 22:13   416.38    4.84
    CE1OC01-19   1:24:46  2008-11-25 20:47  2008-11-25 22:12   437.98    5.17
    CE1OC01-20   1:20:51  2008-11-25 20:48  2008-11-25 22:09   435.87    5.39
    CE1OC01-1    0:00:00  2008-11-25 20:48                       0.00
    CE1OC01-10   0:00:00  2008-11-25 20:48                     152.25
    CE1OC01-11   0:00:00  2008-11-25 20:48                     143.55
    CE1OC01-12   0:00:00  2008-11-25 20:48                     145.11
    CE1OC01-13   0:00:00  2008-11-25 20:48                     146.92
    CE1OC01-14   0:00:00  2008-11-25 20:48                     140.00
    CE1OC01-15   0:00:00  2008-11-25 20:48                     145.52
    CE1OC01-2    0:00:00  2008-11-25 20:48                     184.33
    CE1OC01-3    0:00:00  2008-11-25 20:48                     183.34
    CE1OC01-4    0:00:00  2008-11-25 20:48                     158.62
    CE1OC01-5    0:00:00  2008-11-25 20:48                     157.09
    CE1OC01-6    0:00:00  2008-11-25 20:48                     150.41
    CE1OC01-7    0:00:00  2008-11-25 20:48                     175.29
    CE1OC01-8    0:00:00  2008-11-25 20:48                     150.55
    CE1OC01-9    0:00:00  2008-11-25 20:48                     154.84

    Hi all, thanks for the quick and extremely helpful answers.
    Beck,
    Thanks for the health check. We are exporting the entire table in parallel, so all the exports begin at the same time. Regarding SORTHEAP: we initially thought that might be our problem, because we were running out of SORTHEAP on the source database server. For this run and the previous run, however, SORTHEAP has remained available and has not been exhausted. That's what was so confusing, because the symptoms looked like a buffer overrun.
    Ralph,
    The WHERE technique you provided worked perfectly. Our export times have improved dramatically by switching to the forced full tablescan. Having always been trained to eliminate full tablescans, we found it counterintuitive at first, but given the nature of the export query, combined with the unsorted export, it now makes total sense why the tablescan works so much better.
    Looks like you were right: in this case the index adds too much additional overhead, especially since our cluster ratio was terrible (in the 50% range), so the index was definitely working against us, bouncing all over the table to pull the data out.
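    We won't reproduce the exact clause here, but one common way to force the tablescan on DB2 is to wrap the indexed columns in a trivial expression, such as concatenating an empty string, which makes the predicates non-indexable (a sketch; whether this matches Ralph's exact formulation is an assumption):
    -- index-friendly form (what we used before; the optimizer chose IXSCAN)
    select * from CE1OC01 where MANDT='212'
    and ("BELNR" > '0124727994') and ("BELNR" <= '0131810250');
    -- tablescan-forcing form: the || '' defeats index matching on both columns
    select * from CE1OC01 where MANDT || '' = '212'
    and ("BELNR" || '' > '0124727994') and ("BELNR" || '' <= '0131810250');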
    We're going to look at some of our other long-running tables and see if this technique improves their runtimes as well.
    Thanks so much, that helped us out tremendously. We will verify that the data from source to target matches up 1-for-1 by running a consistency check.
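    The consistency check itself can be as simple as running the same per-range row count on the source and the target and comparing the results (a minimal sketch, reusing the split ranges from above):
    select count(*) as rows_in_range
    from CE1OC01 where MANDT='212'
    and ("BELNR" > '0124727994') and ("BELNR" <= '0131810250');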
    Look at the throughput difference between the previous run and the current run:
    Previous run (index access):
    package     time       start date        end date          size MB  MB/min
    CE1OC01-11   40:14:47  2008-11-20 19:43  2008-11-22 11:58   437.27    0.18
    CE1OC01-14   39:59:51  2008-11-20 19:43  2008-11-22 11:43   427.60    0.18
    CE1OC01-12   39:58:37  2008-11-20 19:43  2008-11-22 11:42   430.66    0.18
    CE1OC01-13   39:51:27  2008-11-20 19:43  2008-11-22 11:35   421.09    0.18
    CE1OC01-15   39:49:50  2008-11-20 19:43  2008-11-22 11:33   426.54    0.18
    CE1OC01-10   39:33:57  2008-11-20 19:43  2008-11-22 11:17   429.44    0.18
    CE1OC01-8    39:27:58  2008-11-20 19:43  2008-11-22 11:11   417.62    0.18
    CE1OC01-6    39:02:18  2008-11-20 19:43  2008-11-22 10:45   416.35    0.18
    CE1OC01-5    38:53:09  2008-11-20 19:43  2008-11-22 10:36   413.29    0.18
    CE1OC01-4    38:52:34  2008-11-20 19:43  2008-11-22 10:36   424.06    0.18
    CE1OC01-9    38:48:09  2008-11-20 19:43  2008-11-22 10:31   416.89    0.18
    CE1OC01-3    38:21:51  2008-11-20 19:43  2008-11-22 10:05   428.16    0.19
    CE1OC01-2    36:02:27  2008-11-20 19:43  2008-11-22 07:46   409.05    0.19
    CE1OC01-7    33:35:42  2008-11-20 19:43  2008-11-22 05:19   414.24    0.21
    CE1OC01-16    9:33:14  2008-11-20 19:43  2008-11-21 05:16   417.62    0.73
    CE1OC01-17    1:20:01  2008-11-20 19:43  2008-11-20 21:03   416.38    5.20
    CE1OC01-18    1:19:29  2008-11-20 19:43  2008-11-20 21:03   429.41    5.40
    CE1OC01-19    1:16:13  2008-11-20 19:44  2008-11-20 21:00   437.98    5.75
    CE1OC01-20    1:14:06  2008-11-20 19:49  2008-11-20 21:03   435.87    5.88
    PLPO          0:52:14  2008-11-20 19:43  2008-11-20 20:35    92.70    1.77
    BCST_SR       0:05:12  2008-11-20 19:43  2008-11-20 19:48    29.39    5.65
    CE1OC01-1     0:00:00  2008-11-20 19:43                       0.00
                558:13:06  2008-11-20 19:43  2008-11-22 11:58  8171.62
    Current run (forced full tablescan):
    package     time      start date        end date          size MB   MB/min
    CE1OC01-9    9:11:58  2008-12-01 20:14  2008-12-02 05:26   1172.12    2.12
    CE1OC01-5    9:11:48  2008-12-01 20:14  2008-12-02 05:25   1174.64    2.13
    CE1OC01-4    9:11:32  2008-12-01 20:14  2008-12-02 05:25   1174.51    2.13
    CE1OC01-8    9:09:24  2008-12-01 20:14  2008-12-02 05:23   1172.49    2.13
    CE1OC01-1    9:05:55  2008-12-01 20:14  2008-12-02 05:20   1188.43    2.18
    CE1OC01-2    9:00:47  2008-12-01 20:14  2008-12-02 05:14   1184.52    2.19
    CE1OC01-7    8:54:06  2008-12-01 20:14  2008-12-02 05:08   1173.23    2.20
    CE1OC01-3    8:52:22  2008-12-01 20:14  2008-12-02 05:06   1179.91    2.22
    CE1OC01-10   8:45:09  2008-12-01 20:14  2008-12-02 04:59   1171.90    2.23
    CE1OC01-6    8:28:10  2008-12-01 20:14  2008-12-02 04:42   1172.46    2.31
    PLPO         0:25:16  2008-12-01 20:14  2008-12-01 20:39     92.70    3.67
                90:16:27  2008-12-01 20:14  2008-12-02 05:26  11856.91

  • Error : The ORDER BY clause is invalid in views, inline functions, derived tables

    Hi All,
    I am on OBIEE 11g (6.2) on Windows Server 2008, and my database is SQL Server 2008. I am facing this error on reports where I try to edit one of the column formulas into something like 'abc/sum(abc)*100'.
    10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 16001] ODBC error state: 37000 code: 8180 message: [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared.. [nQSError: 16001] ODBC error state: 37000 code: 1033 message: [Microsoft][ODBC SQL Server Driver][SQL Server]The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified.. [nQSError: 16002] Cannot obtain number of columns for the query result. (HY000)
    One of the solutions I have found is to edit the EXPRESSION_IN_ORDERBY_SUPPORTED feature in the database properties.
    I want to know what EXPRESSION_IN_ORDERBY_SUPPORTED means.
    When I create a calculation in 11g like abc/sum(abc) in the column formula, I get this error.
    What does this error mean? Does OBIEE 11g not support using these expressions in a report, and since it applies an ORDER BY clause to the queries, does the report fail?
    Could anybody please explain the issue? There is very limited information on this on the web.
    Thanks in advance.
    Ronny

    Thanks svee for the quick response. I actually resolved the issue by unchecking the EXPRESSION_IN_ORDERBY_SUPPORTED option in the database properties, but I want to understand how that makes the difference.
    What does EXPRESSION_IN_ORDERBY_SUPPORTED mean? Does it mean that because OBIEE adds an ORDER BY to every query it generates, any expression I put in an Answers report won't be supported?
    Please explain.
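    To make the mechanics concrete, here is the pattern SQL Server rejects versus one it accepts (a sketch with hypothetical table and column names; that unchecking the feature makes the BI Server order by position or alias at the outer level, instead of repeating the raw expression in the pushed-down ORDER BY, is our understanding rather than confirmed documentation):
    -- rejected: ORDER BY inside a derived table without TOP or FOR XML
    select * from (
        select region, sales from facts
        order by sales / sum(sales) over () * 100
    ) d;
    -- accepted: the derived table has no ORDER BY; ordering happens outside
    select region, sales
    from (select region, sales from facts) d
    order by 2;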
