Different number of results

Hi all, I want to ask about my script.
There are two queries:
query 1 returns 76 rows and query 2 returns 71 rows.
Does that mean they are not equivalent?
Query 1 takes longer than query 2,
maybe because of the subquery.
Any suggestions for changing query 2? Query 2 takes only 0.125 seconds
while query 1 takes 9 seconds, but query 2 throws an ORA-00600 internal error.
Thanks in advance.
-- Query 1:
SELECT A.PRODUCTION_DAY,
       A.OP_SUB_PRODUCTIONUNIT_CODE,
       A.OP_AREA_CODE,
       SUBSTR(A.OP_FCTY_1_CODE, 1, 11),
       SUBSTR(A.OP_FCTY_1_CODE, -5, 5) CGS,
       THEOR_NET_VOL_UNADJ, THEOR_NET_VOL_ADJ,
       GROSS_VOL_WAT, ALLOC_NET_VOL,
       ALLOC_VOL_WAT,
       A.CORR_FACTOR
FROM   (SELECT * FROM STRM_PROD_DAILY
        WHERE STREAM_PHASE = 'OIL') A,
       (SELECT T.DATE_TIME, S.GROSS_VOL_WAT, S.ALLOC_VOL_WAT,
               D.OP_FCTY_1_CODE
        FROM   WTR_DAYPROD S, STREAM_DIM D, TIME_DIM T
        WHERE  CODE LIKE '%MFL%'
        AND    STREAM_CATEGORY = 'WAT_PROD'
        AND    S.STREAM_KEY = D.STREAM_KEY
        AND    S.TIME_KEY = T.TIME_KEY) B
WHERE  A.OP_SUB_PRODUCTIONUNIT_CODE = 'SLN'
AND    A.PRODUCTION_DAY = '1-JAN-2008'
AND    A.OP_FCTY_1_CODE IS NOT NULL
AND    B.OP_FCTY_1_CODE(+) = A.OP_FCTY_1_CODE
AND    B.DATE_TIME(+) = A.PRODUCTION_DAY;

-- Query 2:
SELECT P.PRODUCTION_DAY,
       P.OP_SUB_PRODUCTIONUNIT_CODE,
       P.OP_AREA_CODE,
       SUBSTR(P.OP_FCTY_1_CODE, 1, 11),
       SUBSTR(P.OP_FCTY_1_CODE, -5, 5) CGS,
       THEOR_NET_VOL_UNADJ, THEOR_NET_VOL_ADJ,
       GROSS_VOL_WAT, ALLOC_NET_VOL,
       ALLOC_VOL_WAT,
       P.CORR_FACTOR
FROM   STRM_PROD_DAILY P, WTR_DAYPROD S, STREAM_DIM D, TIME_DIM T
WHERE  P.OP_SUB_PRODUCTIONUNIT_CODE = 'SLN'
AND    P.STREAM_PHASE = 'OIL'
AND    S.STREAM_KEY = D.STREAM_KEY
AND    S.TIME_KEY = T.TIME_KEY
AND    D.OP_FCTY_1_CODE(+) = P.OP_FCTY_1_CODE
AND    T.DATE_TIME(+) = P.PRODUCTION_DAY;

Sorry, I didn't understand. First you say "query 2 returns 71 rows" and then
"query 2 throws an ORA-00600 internal error".

Similar Messages

  • Different set of results

    The searches for:
         o ring
    and
         o-ring
    return a different number of results.
    Should they be the same, seeing that "-" is a non-searchable character?

    If the character "-" is not included in the search characters, is it correct that "o-ring" will be treated as a phrase search?
    I found the following in the Dev Studio help:
    "Adding search characters that support automatic phrasing: inclusion of original punctuation marks in search query phrases returns more relevant results."
    It says "more", which made me think... maybe not a "phrase". Please explain, thanks.

  • Merging two separate Tables into One - different number of rows

    Hi. I have a problem. Oracle 10.2.0.4.0
    Seven years ago, when I was learning how to write queries, my teacher told me it is possible to create a kind of report where results are combined into one view (report) in which different variables can have a different number of rows.
    I just remember that it needs a GROUP BY and some kind of join?
    Please help.
    The link below shows a sample view.
    I need to combine Table A with Table B using the D column to produce Wynik (the result).
    Table creation scripts:
    CREATE TABLE "TABELA_A" ( A NUMBER, B NUMBER, C NUMBER, D NUMBER );
    INSERT INTO "TABELA_A" (A, B, C, D) VALUES (123, 1, 70, 999);
    INSERT INTO "TABELA_A" (A, B, C, D) VALUES (123, 2, 80, 999);
    INSERT INTO "TABELA_A" (A, B, C, D) VALUES (234, 1, 100, 111);
    INSERT INTO "TABELA_A" (A, B, C, D) VALUES (456, 1, 10, 222);
    CREATE TABLE "TABELA_B" ( D NUMBER, E VARCHAR2(255), F NUMBER );
    INSERT INTO "TABELA_B" (D, E, F) VALUES (999, 'A', 1);
    INSERT INTO "TABELA_B" (D, E, F) VALUES (999, 'B', 1);
    INSERT INTO "TABELA_B" (D, E, F) VALUES (999, 'B', 3);
    INSERT INTO "TABELA_B" (D, E, F) VALUES (999, 'C', 1);
    INSERT INTO "TABELA_B" (D, E, F) VALUES (111, 'A', 1);
    INSERT INTO "TABELA_B" (D, E, F) VALUES (111, 'C', 2);
    And the result should look like the picture:
    [http://i303.photobucket.com/albums/nn153/katanbutcher/pytanko.jpg?t=1306152636]
    Thanks for the help.

    Maybe if you follow the instructions mentioned in this post you may get some help. For example, I wouldn't try to click on the link you provided.
    SQL and PL/SQL FAQ
    Regards
    Raj
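
    One common pattern for this kind of layout (a guess, since the exact target is only visible in the linked picture) is to number the rows inside each D group on both sides and pair them up with a full outer join, so a D value that has more rows in one table than the other is padded with NULLs. A sketch against the tables above:

    -- Hypothetical sketch: align TABELA_A and TABELA_B rows per D value, padding with NULLs.
    SELECT COALESCE(a.D, b.D) AS D, a.A, a.B, a.C, b.E, b.F
    FROM  (SELECT t.*, ROW_NUMBER() OVER (PARTITION BY D ORDER BY B)    AS rn FROM TABELA_A t) a
    FULL OUTER JOIN
          (SELECT t.*, ROW_NUMBER() OVER (PARTITION BY D ORDER BY E, F) AS rn FROM TABELA_B t) b
      ON  a.D = b.D AND a.rn = b.rn
    ORDER BY COALESCE(a.D, b.D), COALESCE(a.rn, b.rn);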

  • The source and target structure have a different number of fields

    Hi,
    I am new to workflow and I am trying to create an attachment in Workflow (SAP ECC 6.0) and pass it through to a User Decision (the User Decision works fine); however, the workflow is failing on the attachment step with 'The source and target structure have a different number of fields'. The bindings check OK. Please see the details below.
    I have used the document 'Creating Attachments to Work Items or to User Decisions in Workflows' by Ramakanth Reddy for guidance. Thanks in advance.
    1) Workflow containers (SWDD)
    WORKITEMID (import)
    ZSWR_ATT_ID (export)
    SOFM (export)
    2) Task Container (PFTC)
    1 import parameter defined - WORKITEMID (swr_struct-workitemid)
    2 Export parameters defined
    - SOFM (Ref. obj. type SOFM)
    - ZSWR_ATT_ID  (swr_att_id-doc_id)
    Binding task -> Method
    Binding for 1 parameter (import) defined
    Task <- Method
    Binding for 2 parameters (export) defined
    3) Z  BOR object created with a Method, Method Parameters and Event (SWO1)
    1 import parameter defined
    2 export parameters defined
    Method calls FM SSF_FUNCTION_MODULE_NAME, CONVERT_OTF, SCMS_XSTRING_TO_BINARY and SAP_WAPI_ATTACHMENT_ADD
    Workflow is triggered by FM SAP_WAPI_CREATE_EVENT, Return_code = 0
    Event_id = 00000000000000000001
    Test results
    A) Triggered by ABAP/ FM SAP_WAPI_CREATE_EVENT - SWI2_DIAG results
    Work item  14791: object <z bor object name> method <method name> cannot be executed. The source and target structure have a different number of fields (this message is repeated 3 times). Error handling for work item 14791. No errors occurred -> details in long text (message is repeated 3 times).
    Message no. WL821, OL383, WL050 in long text
    B) Z BOR Test method <execute>
    Enter workitem id.
    Runtime error - Data objects in Unicode programs cannot be converted. The statement "MOVE src TO dst" requires that the operands "dst" and "src" are convertible. Since this statement is in a Unicode program, the special conversion rules for Unicode programs apply.                                        
    In this case, these rules were violated.   
    Program                             SAPLSWCD                
    Include                                LSWCDF00                
    Row                                    475                     
    Module type                        (FORM)                  
    Module Name                      MOVE_CONTAINER_TO_VALUE           
    C) Z BOR Test method <debugging>
    Enter workitem id.
    SAP_WAPI_ATTACHMENT_ADD, return_code = 0, message_lines  = Attachment created            
    both  swc_set_element container work ok
    Runtime error occurs after end_method executed. Data objects in Unicode programs cannot be converted.
    D) Workflow test
    Enter workitem id <execute>
    Task started> Workflow log> Status = Error
    Workflow errors in Attachment step (however Office document can be viewed in details for step).

    The problem has now been resolved. It was related to the use of the swr_att_id structure and the swc_set_element statement in the BOR program; it was resolved by only setting the workflow container to swr_att_id-doc_id.

  • Different Risk Analysis Results with the same user from 2 different RAR

    Hi..
    I've loaded the same risks, rules, etc., into 2 GRC RAR environments (Sandbox and Quality systems); both of them are connected to the same SAP ECC system. But when I do a user risk analysis (authorization level), the result from Sandbox is different from the Quality system. I don't have users or roles mitigated yet, users are synchronized, the rules are exactly the same, and I don't know what is happening. Please help me.
    Thanks...

    Hi...
    If I do a full sync of users to the same ECC system from both RAR boxes, I get a different number of users loaded (i.e. 18757 vs. 18141); it is a similar case with the full sync of roles (13100 vs. 13150).
    If I load exactly the same set of functions into both RAR systems and generate the rules, I get the same problem: a different number of rules is generated.
    I've verified both RAR configurations and they are the same (excluded users, mitigated roles, etc.).
    Is it a normal behavior? What could be wrong?
    Thanks in advance!!

  • Missing large number of results through Bing Search API (web results only)

    When making multiple calls to the Bing Web Search API (with a different $skip parameter), many queries I try seem to be missing many of the results I'd expect.
    For example, searching for the string 'obama' on bing.com shows 107,000,000 results available.
    When I search using the web search API using:
    https://api.datamarket.azure.com/Bing/SearchWeb/v1/Web?Query=%27obama%27&%24format=json
    I get 50 results, and the '__next' parameter is given as 'https://api.datamarket.azure.com/Data.ashx/Bing/SearchWeb/v1/Web?Query='obama'&$skip=50'
    If I repeat this several times, eventually I get a response with fewer than 50 results and no '__next' parameter, indicating there are no more results.
    However, I always get far fewer than 1000 results (I'd expect there to be at least 1000). Trying to get 1000 results (by making a request and then repeatedly querying the '__next' URL), I get a different number of results each time:
    attempt 1: 355 results
    attempt 2: 441 results
    attempt 3: 358 results
    attempt 4: 692 results
    attempt 5: 692 results
    attempt 6: 694 results
    attempt 7: 659 results
    Querying for this should always return at least 1000 results, since 'obama' has 107,000,000 results listed when searching from bing.com
    Any idea what's going on here?

    Sorry to respond to this old thread, but the problem persists. It exists in both the web UI and the API. The initial result page (on the web) or result object (in the API) reports millions of search results; however, after clicking through a number of result pages (on the web), the total number is reduced to a few hundred. Similarly, in the API, setting the '$skip' parameter above this number does not return results. In the Obama case the first page shows 18.2 million results (http://www.bing.com/search?q=obama&go=Submit+Query&qs=bs&form=QBRE) but from page 35 and over only 529 results are reported (e.g., http://www.bing.com/search?q=obama&qs=n&pq=obama&sc=8-3&sp=-1&sk=&ghc=1&cvid=92729d6076e24a37a9e6ee099da99a4a&first=527&FORM=PERE7). Therefore the problem does not seem to be related to a difference between the API and the web UI, but rather that Bing does not provide any results past a certain point (presumably because nobody is interested in them anyway). However, for data mining/web content analysis it is desirable to get all results, even uninteresting ones. Is this behaviour documented somewhere, or can it be influenced?

  • Filr 1.1 LDAP Preview Not Returning Correct Number of Results

    I'm finishing the setup of Filr 1.1 in our environment but noticed today that the LDAP preview does not return the correct number of results. The query:
    (&(objectClass=Person)(|(employeeType=E)(employeeType=Y)(employeeType=Z)))
    has been tried against multiple AD domain controllers and has so far returned 3 different user counts - 2895, 2800, and 2700. Mostly it returns 2800. The correct count using PowerShell with the exact same LDAP filter is over 5000.
    I would prefer not to try a sync until I have some confidence that it will complete successfully. Any suggestions?

    On Wed, 22 Apr 2015 17:26:03 GMT, jameswatson3
    <[email protected]> wrote:
    Could you try it with Filr 1.2?
    https://download.novell.com/Download...d=q-mgVFDsOKQ~

  • 2 different number formats

    Post Author: ezworld
    CA Forum: Xcelsius and Live Office
    Hi all, I need help again
    I have a List box and a Line chart. The List box has approximately 10 separate metrics, which are in different number formats (numeric and percentage). The problem is that I cannot figure out how to set the number format for each selection in the List box, so I keep the number format as numeric and my percentages end up as fractions. I have been searching but cannot find a tip or trick that addresses this issue. Can anyone help?
    Xcelsius Engage 2008

    Post Author: David Lopez
    CA Forum: Xcelsius and Live Office
    As far as I can see, the Appearance / Text / Labels property only allows for one numeric format at any given time.  So in this case, the user would have to either select a numeric or a percentage format.
    I'm not sure which Insertion type that you wanted to use, but would the Spreadsheet Table component be a possible alternative for you?  You can insert by either Position or Rows with this component.

  • Hi there. I have a problem with sound on my 4s. When I move the volume slider up, it sounds fine. But when I move the volume slider down I hear barely audible and unclear sound in my headphones. I tried different headphones but the result is the same as with the old one. Help

    Hi there. I have a problem with sound on my 4s. When I move the volume slider up, it sounds fine. But when I move the volume slider down I hear barely audible and unclear sound in my headphones. I tried different headphones but the result is the same as with the old one. Help.

    Try A and B
    (A) Restart iPad
    1. Hold down the Sleep/Wake button until the red slider appears.
    2. Drag the slider to turn off iPad.
    3. Turn iPad back on, hold down the Sleep/Wake until the Apple logo appears
    (B) Reset iPad
    Hold down the Sleep/Wake button and the Home button at the same time for at least ten seconds, until the Apple logo appears
    Note: Data will not be affected.

  • Different number of rows for different columns in JTable

    Hi,
    I need to create a JTable with a different number of rows for different columns.
    The row height should also be different in each column.
    Say there is a JTable with 2 columns: Col1 has 5 rows and Col2 has 2 rows.
    The row height in Col2 should be an integer multiple of the row height in Col1.
    How do I do this?
    Can anybody send me some sample code?
    Thanks in advance.

    How about nesting JTables with 1 row and many columns inside a JTable with 1 column and many rows?
    Or you could leave the extra columns null/blank.
    You could use a GridBagLayout and put a panel in each group of cells and not use JTable at all.
    It would help if you were more specific about how you wanted it to appear and behave.

  • How to get the total result count for a particular key on a cluster

    Hi-
    My application requirement is that the client side needs only a limited number of records for a 'search key' out of the total records found in the cluster. I also need the 'total result count' for that key present on the cluster.
    To get the subset of records I'm using an IndexAwareFilter and returning only a limited set from each individual node. Though I get the total number of records present on each individual node, it is not possible to return this count to the client from the IndexAwareFilter (the filter returns only a Binary set).
    Is there any way I can get this number (the total result size) on the client side without returning the whole chunk of data?
    Thanks in advance.
    Prashant

    user11100190 wrote:
    Hi,
    Thanks for suggesting a solution, it works well.
    But apart from the count (cardinality), the client also expects the actual results. In this case, it seems that the filter will be executed twice (once for counting, then once again for generating actual resultset)
    Actually, we need to perform paging. In order to achieve paging in an efficient manner we need the filter to return only PAGESIZE records and also return the total count of entries that meet the criteria.
    If you want to do paging, you can use the LimitFilter class.
    If you want to have paging AND the total number of results, then at the moment you have to use two passes if you want to use out-of-the-box features, because LimitFilter does not return the total number of results (which, by the way, may change between two page retrievals).
    What we currently do is: the filter puts the total count in a static variable but returns only the first N records. The aggregator then combines this information into a single list and returns it to the client (the List returned by the aggregator contains a special entry representing the count).
    This is not really a good idea, because if you have more than one user doing this operation then you will have problems storing more than one value in a single static variable, if you use a cache service with a thread pool (thread count set to larger than one).
    We assume that the aggregator will execute immediately after the filter on the same node, this way aggregator will always read the count set by the filter.
    You can't assume this if you have multiple client threads doing the same kind of filtering operation and you have a thread-pool configured for the cache service.
    Please tell us if our approach will always work, and whether it will be efficient as compared to using Count class which requires executing filter twice.
    No it won't if you used a thread-pool. Also, it might happen that Coherence will execute the filtering and the aggregation from the same client thread multiple times on the same node if some partitions were newly moved to the node which already executed the filtering+aggregation once. I don't know anything which would even prevent this being executed on a separate thread concurrently.
    The following solution may be working, but I can't fully recommend it as it may leak memory depending on how exactly the filtering and aggregation is implemented (if it is possible that a filtering pass is done but the corresponding aggregation is not executed on the node because of some partitions moved away).
    At sending the cache.aggregate(Filter, EntryAggregator) call you should specify a unique key for each such filtering operation to both the filter and the aggregator.
    On the storage node you should have a static HashMap.
    The filter should do the following two steps while being synchronized on the HashMap.
    1. Ensure that a ConcurrentLinkedQueue object exists in a HashMap keyed by that unique key, and
    2. Enqueue the total number count you want to pass to the aggregator into that queue.
    The parallel aggregator should do the following two steps while being synchronized on the HashMap.
    1. Dequeue a single element from the queue, and return it as a partial total count.
    2. If the queue is now empty, then remove it from the HashMap.
    The parallel aggregator should return the popped number as a partial total count as part of the partial result.
    The client side of the parallel aware aggregator should sum the total counts in the partial result.
    Since the enqueueing and dequeueing may be interleaved from multiple threads, it may be possible that the partial total count returned in a result does not correspond to the data in the partial result, so you should not base anything on that assumption.
    Once again, that approach may leak memory based on how Coherence is internally implemented, so I can't recommend this approach but it may work.
    Another thought is that since returning entire cached values from an aggregation is more expensive than filtering (you have to deserialize and reserialize objects), you may still be better off by running a separate count and filter pass from the client, since for that you may not need to deserialize entries at all, so the cost on the server may be lower.
    Best regards,
    Robert

  • How can I change the phone number on my iPod to a different number

    How can I change the phone number on my iPod to a different number than my iPhone?

    Only by getting a different iPhone and using your Apple ID for Messages/FaceTime on your iPod. The iPod (and iPad) only have a phone number for Messages and FaceTime if the Apple ID used for those services on the iPod is also used on an iPhone with iOS 6 or later.
    iOS and OS X: Link your phone number and Apple ID for use with FaceTime and iMessage

  • There is an invalid number of result bindings returned for the ResultSetType

    SSIS SQL Task: Single Row result set.
    The code was updated to test for data in the target: if it exists, do a merge, else do an insert.
    Previously it was just a merge that output $Action to @ChangeSum, and then @ChangeSum was queried for update and insert counts.
    That all worked, but after injecting the new code I receive the error "There is an invalid number of result bindings returned for the ResultSetType", and I don't know what it means or how to troubleshoot it.
    Injected new code:
    IF OBJECT_ID('tempdb..##TblTemp', 'U') IS NOT NULL
    DROP TABLE ##TblTemp
    Declare @sql nvarchar(max);
    set @sql = @TestForData
    exec (@sql);
    IF EXISTS  (select top 1 * from ##TblTemp)
           Begin
    --Beginning of existing code
               begin transaction;
                begin try
                declare @MergeQuery varchar(max)
                set @MergeQuery = convert(varchar(max), @MergeQuery1) +  convert(varchar(max), @MergeQuery2)
                + ' ' + convert(varchar(max), @MergeQuery3)
                + ' ' + convert(varchar(max), @MergeQuery4)
                + ' ' + convert(varchar(max), @MergeQuery5);
                exec(@MergeQuery);
                end try
                begin catch
                    declare
                    @Message VARCHAR(4000)
                    ,@Severity INT
                    ,@State  INT;
                    select
                    @Message = ERROR_MESSAGE()
                    ,@Severity = ERROR_SEVERITY()
                    ,@State = ERROR_STATE();
                    if @@TRANCOUNT > 0
                    rollback transaction;
                    raiserror(@Message, @Severity, @State);
                end catch;
                if @@trancount > 0
                begin
                    commit transaction;
                end
    --End of existing code
          End
    else
           Begin
        declare @InsertQuery nvarchar(max)
        set @InsertQuery = convert(varchar(max),@InsertQuery1)
        exec (@InsertQuery);
          end
    Drop Table ##TblTemp
    =================================
    SSIS Variable @InsertQuery1:
    This variable is executed in the SQL Task, and from the last 3 lines I expect a single row of insert and update counts to be returned.
       declare @ChangeSum table(change varchar(25));
       declare @Inserted int = 0;
       declare @Updated int = 0;
    While 1 = 1  
        Begin  
            INSERT INTO [R_Paid].[BusCodeF454x93]
            OUTPUT Inserted.Sta3n INTO @ChangeSum
            SELECT TOP 1000 s.*
            FROM [R_Stage].[BusCodeF454x93] s
            WHERE NOT EXISTS
              ( SELECT 1
                FROM [R_Paid].[BusCodeF454x93]
                WHERE STA3N = S.STA3N and [BusCodeF454x93IEN] = s.[BusCodeF454x93IEN] )
           IF @@ROWCOUNT  = 0 BREAK       
    END
       set @Inserted = (select count(*) from @ChangeSum );
       set @Updated = 0;
     select @Inserted as Inserted, @Updated as Updated;

    "SELECT " + (DT_WSTR,50)@[User::TargetExists] + " = CASE WHEN COUNT(*) > 0 THEN 1 ELSE 0 END
    FROM " + (DT_WSTR, 100) @[User::DataDestinationTable]
    must do the trick
    The T-SQL merge allows both inserts and updates:
    MERGE Target AS T
    USING Source AS S
    ON (T.EmployeeID = S.EmployeeID)
    WHEN NOT MATCHED BY TARGET AND S.EmployeeName LIKE 'S%'
    THEN INSERT(EmployeeID, EmployeeName) VALUES(S.EmployeeID, S.EmployeeName)
    WHEN MATCHED
    THEN UPDATE SET T.EmployeeName = S.EmployeeName
    WHEN NOT MATCHED BY SOURCE AND T.EmployeeName LIKE 'S%'
    THEN DELETE
    OUTPUT $action, inserted.*, deleted.*;
    ROLLBACK TRAN;
    GO
    Arthur My Blog
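
    A hedged guess at the error itself: with the "Single row" result set type, the Execute SQL Task expects every execution path of the batch to return exactly one result set whose columns match the result-set mapping. In the new IF/ELSE code only the insert branch (@InsertQuery1) ends with SELECT @Inserted ..., @Updated ..., so when the EXISTS/merge branch runs there is nothing to bind. A minimal sketch of one way to keep the shape identical on both paths (placeholder counts only; real counts would still have to be produced inside the dynamic SQL, the way @InsertQuery1 does):

    -- Hypothetical sketch: both branches end with one row of (Inserted, Updated),
    -- so the task's "Single row" result set always has something to bind.
    IF EXISTS (SELECT 1 FROM ##TblTemp)
    BEGIN
        EXEC (@MergeQuery);                      -- existing merge path
        SELECT 0 AS Inserted, 0 AS Updated;      -- placeholder counts for this path
    END
    ELSE
    BEGIN
        EXEC (@InsertQuery);                     -- @InsertQuery1 already ends with
    END                                          -- SELECT @Inserted ..., @Updated ...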

  • How can I set a different number of iterations for different scenario profiles that we add to "AutoPilot" in load testing of OATS

    Hi,
    I have a few sets of load test scenarios. I would like to add each of these test scenarios to "AutoPilot" and run each scenario profile for a different number of iterations.
    In Oracle Load Testing, on the "Set Up AutoPilot" tab, I see a section labelled "Iterations played by each user:", which runs that many iterations for every virtual user of every profile added under "Submitted Profile Scenario". So is there anything like that to set a different number of iterations for every scenario profile added in AutoPilot?
    Thanks in advance

    It's not a built-in feature to override a page's styles on a tab-by-tab or site-by-site basis, but perhaps someone has created an add-on for this?
    It also is possible to create style rules for particular sites and to apply them using either a userContent.css file or the Stylish extension. The Greasemonkey extension allows you to use JavaScript on a site-by-site basis, which provides further opportunity for customization. But these would take time and lots of testing to develop and perfect (and perfection might not be possible)...
    Regarding size, does the zoom feature help solve that part? In case you aren't familiar with the keyboard and mouse shortcuts for quickly changing the zoom level on a page, this article might be useful: [[Font size and zoom - increase the size of web pages]].

  • Different number of records in RSA3 for Full and INIT

    Hi All,
    I am about to load the data from 0FI_GL_4 and checked the number of records in RSA3. It returned a different number of records when I ran the extract with options 'F' and 'C'. Why is that? This is the first time I am loading data into BW for this DataSource. I would expect the number of records to be the same whether I do a full extract or an init, but I was surprised to see a different number of records.
    Best Regards,
    James.

    Hi James,
    Probably it's because of the time stamps stored in table BWOM2_TIMEST.
    If you run a delta init and then a delta update, the number of records should match the records of the full update.
    You can find some more info about table BWOM2_TIMEST here:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/a7f2f294-0501-0010-11bb-80e0d67c3e4a#527,7,Timestamp%20Mechanism:%20New%20documents
    Bye,
         Zsolt
