Performance-wise data representation

Hello there,
I would like to start a discussion about choosing a solution implementation for performance. From a performance point of view, which of the following scenarios would be the better choice?
1. Run a process on each and every record in a huge table, for example one with a million records.
2. As a pre-processing step, maintain another table, which might have about 800 fields, that represents a map keyed on the id values of the main huge table and must be filled from the values in that table. Then, instead of processing the huge table, simply query the map table, which can be indexed on the needed values.
Which is the bigger cause of bad performance: a process running over many records, or the many records that make up the pre-processing?
many thanks

Thank you Billy for replying.
Billy Verreynne wrote:
Nor is performance something that one looks at after the design is done and the code written. Performance is a primary factor that needs to be considered with the h/w bought, the s/w installed and configured, the design, and every single line of code written.
Yes, I am currently in the design phase, so I am trying to understand the major performance principles that might affect the software when dealing with huge amounts of data, whether pre-processing would be better, and similar implementation issues.
Here is the case, logically:
The process I mentioned in the post corresponds to a procedure that must be applied to data from a table and then return a certain value, calculated on the fly.
Some of that processing might not be needed, so in order to avoid a huge number of unnecessary operations I need to perform some kind of predicating or indexing based on certain values.
What is the best practice for such scenarios, performance-wise?
Thanks
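For illustration, a minimal Oracle sketch of the pre-processing option described above (the table, column, and function names are hypothetical; expensive_calc stands in for whatever per-row procedure is applied):

-- Pre-processing step: materialize the derived value once per row.
CREATE TABLE big_table_map AS
SELECT id,
       expensive_calc(id) AS calc_value
FROM   big_table;

-- Index the pre-computed value so later lookups avoid reprocessing.
CREATE INDEX big_table_map_idx ON big_table_map (calc_value);

-- Queries now probe the indexed map instead of re-running the
-- calculation against every record in the huge table:
SELECT m.id
FROM   big_table_map m
WHERE  m.calc_value = 42;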

Similar Messages

  • Performance Analyzer data representation

    How can I make Performance Analyzer display my program's functions ordered by the time spent inside? E.g. the ordering metric should be "Total LWP time, which is the real time, summed across threads".

    It depends on what you mean by the time spent inside. If you want all the time, including time spent in function calls, then use inclusive user CPU time. If you mean just the time spent in a function when it is at the bottom of the call stack, then use exclusive user CPU time. Click on the top of the column you wish to sort by.

  • Difference between a temp table and a table variable, and which one is better performance-wise?

    Hello,
    Could anyone explain the difference between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
    Which one is recommended for better performance?
    Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
    In my case, 1-2 days of transactional data come to more than 3-4 million rows. I tried both a # table and a table variable and found the table variable faster.
    Does a table variable use memory or disk space?
    Thanks Shiven:) If Answer is Helpful, Please Vote

    Check the following link to see the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
    Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
    Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records, then table variables are well suited.
    On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically, as in the sketch below.
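    A minimal T-SQL sketch of both options (table and column names are illustrative):

    -- Temp table: explicit indexes can be added after creation.
    CREATE TABLE #emp_txn (emp_id INT, txn_date DATE, amount MONEY);
    CREATE CLUSTERED INDEX cx_emp_txn ON #emp_txn (emp_id);

    -- Table variable: no explicit CREATE INDEX; an inline PRIMARY KEY
    -- creates its clustered index implicitly.
    DECLARE @emp_txn TABLE (emp_id INT PRIMARY KEY, txn_date DATE, amount MONEY);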
    But it also depends on the specific scenario you are dealing with. Can you share it?
    ~manoj | email: http://scr.im/m22g
    http://sqlwithmanoj.wordpress.com
    MCCA 2011 | My FB Page

  • Processing in 2 internal tables - performance-wise better option

    Hi Experts,
    I have 2 internal tables.
    ITAB1 and ITAB2  both are sorted by PSPHI.
    ITAB1 has PSPHI, some more fields, INVOICE_DATE and AMT.
    ITAB2 has PSPHI, some more fields, and an amount.
    Both itab1 and itab2 will always have the same amount of data.
    I need to filter data from ITAB2 based on the invoice date given on the selection screen; since ITAB2 doesn't have an invoice date field, I am doing further processing to filter the records.
    I have thought of below processing logic and wanted to know if there is a better option performance wise?
    loop at itab1 into wa where invoice_date > p_date. "p_date: selection-screen date
      lv_index = sy-tabix.
      " itab1 and itab2 are parallel (same sort order, same row count),
      " so row lv_index of itab2 corresponds to wa. Note that each
      " delete shifts the indexes of the remaining itab2 rows.
      read table itab2 into wa2 index lv_index.
      if sy-subrc = 0 and wa2-psphi = wa-psphi.
        delete itab2 index lv_index.
      endif.
    endloop.

    Hi Madhu,
    My requirement is as below. Could you please advise on this?
    ITAB1
      PSPHI   INVOICE   INVOICE_DATE   AMT
      15245   INV1      02/2011        400
      15245   INV2      02/2012        430
    ITAB2
      PSPHI   PSNR    MATNR   AMT
      15245   PSNR1   X       430
      15245   PSNR2   Y       400
    When the user enters 02/2011 as the date on the selection screen, I want to delete the data from itab1 and itab2 for invoice dates greater than 02/2011.
    Suppose I delete from ITAB1 for date > selection-screen date and then do:
    loop at itab1.
      delete itab2 where psphi = itab1-psphi.
    endloop.
    This will delete both rows in the above example, because the common field psphi can occur multiple times.
    Can you advise?

  • Performance-wise, is a select statement faster on a view or on a table?

    Performance-wise, is a complex (multi-join) select statement faster on a view or on a table?

    Hi,
    the purpose of a view is not to provide performance benefits; it's basically a way to better structure database code and data access. A view is nothing but a stored query. When the optimizer sees references to a view in a query, it tries to merge it (i.e. replace the view with its definition), but in some cases it may be unable to do so (in the presence of analytic functions, the rownum pseudocolumn, etc.) -- in such cases views can lead to performance degradation.
    If you are interested in performance, what you need is a materialized view, which is basically a table built from a query, but then you need to decide how you would refresh it. Please refer to the documentation for details.
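    For example, a minimal Oracle sketch (the table and column names are illustrative, and the refresh policy has to be chosen to fit your requirements):

    CREATE MATERIALIZED VIEW order_totals_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT customer_id, SUM(amount) AS total_amount
    FROM   orders
    GROUP  BY customer_id;

    -- Refresh manually, e.g. from a scheduled job:
    EXEC DBMS_MVIEW.REFRESH('ORDER_TOTALS_MV');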
    Best regards,
    Nikolay

  • Package - performance-wise is it correct?

    Hi All
    I have created a package which runs as a concurrent program to populate 9 tables. The package includes a separate procedure to populate each of the tables, as below. I would like to know whether the method below is recommended performance-wise, or whether there is a better approach to achieve this?
    Thanks in advance
    regards
    anna
    procedure populate_table1
    is
    begin
      for my_cursor_emp in crs_emp
      loop
        insert into employees
          (emp_no
          ,first_name
          ,last_name)
        values
          (my_cursor_emp.emp_no
          ,my_cursor_emp.first_name
          ,my_cursor_emp.last_name);
      end loop;
    end populate_table1;
    Lot more columns are there in the above procedure. Package continues as
    procedure 2
    procedure 3
    ...

    Annas wrote:
    Hi All
    I have created a package which runs as a concurrent program to populate 9 tables. The package includes a separate procedure to populate each of the tables, as below. I would like to know whether this method is recommended performance-wise, or whether there is a better approach?
    The recommended approach would be to get rid of the cursor loops:
    INSERT INTO source_table
    SELECT <columns>
    FROM   your_query;
    This assumes you actually NEED to populate 9 tables like you say; I find that suspect in and of itself. Can you explain the end goal here? Are you populating temporary tables, doing a data migration, something else?
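    Applied to the populate_table1 example above, a hedged sketch (staging_employees stands in for whatever the cursor crs_emp actually selects from):

    INSERT INTO employees (emp_no, first_name, last_name)
    SELECT s.emp_no, s.first_name, s.last_name
    FROM   staging_employees s;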

  • Which is performance-wise better, MOVE or MOVE-CORRESPONDING?

    Hi SAP-ABAP Experts .
    Which is performance-wise better:
    MOVE or MOVE-CORRESPONDING?
    Regards : Rajneesh

    > A 
    >
    > * access path and indexes
    Indexes and hence access paths are defined when you design the data model. They are part of the model design.
    > * too large numbers of records or executions
    Consider a data warehouse environment: you have to deal with huge loads of data, and a million records are considered "small" here. Terms like "small" or "large" depend on the context you are working in.
    If you never heard of Star transformation, Partitioning and Parallel Query you will get lost here!
    OLTP is different: you have short transactions, but a huge number of concurrent users.
    You would not even consider Bitmap indexes in an OLTP environment - but maybe a design that evenly distributes data blocks over several files for avoiding hot spots on heavily used tables.
    > * processing of internal tables => no quadratic coding
    >
    > these are the main performance issues!
    >
    > > Performance is defined at design time
    > partly yes, but more is determined during runtime, you must check everything at least once. Many things can go wrong and will go wrong.
    Sorry, it's all about the data model design. Sure, you have to tune later in the development, but you really can't tune successfully on a BAD data model ... you have to redesign.
    If the model is good, there is a chance a developer chooses the worst access to it, but then you have the potential to tune with success, because your model allows for a better access strategy.
    The decisions you make in the design phase determine the potential for tuning later.
    >
    > * database does not what you expect
    I call this the black box view: The developer is not interested in the underlying database.
    Why would we have different DB vendors if they all behaved the same way? E.g., compare the concurrency and consistency implementations in various DBs - totally different. You can't simply apply your working knowledge of one database to another DB product. I learned that the hard way while implementing on INFORMIX and ORACLE...

  • Performance with dates in the where clause

    Performance with dates in the where clause
    CREATE TABLE TEST_DATA
    ( FNUMBER NUMBER,
      FSTRING VARCHAR2(4000 BYTE),
      FDATE DATE
    );
    create index t_indx on test_data(fdata);
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    3) I read somewhere: "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for execution plans 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Thanks in advance.
    Execution Plan 1:
    SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1486387033
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
    Note
    - dynamic sampling used for this statement
    Statistics
    4 recursive calls
    0 db block gets
    1610 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 2:
    SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259259259259259259259)
    3 - access("FDATE">=TRUNC(SYSDATE@!) AND "FDATE"<=TRUNC(SYSDATE@!)+.9999884259259259259259259259259259259259)
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 3:
    SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND "FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    Hi,
    user10541890 wrote:
    Performance with dates in the where clause
    CREATE TABLE TEST_DATA
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    create index t_indx on test_data(fdata);
    Did you mean fdate (ending in e)? Be careful; post the code you're actually running.
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC (fdate), then it might be used in Query 1, because the left operand of = is TRUNC (fdate).
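    For example, a minimal sketch (assuming the index is really meant to be on fdate):

    CREATE INDEX t_trunc_indx ON test_data (TRUNC(fdate));

    With that function-based index in place, the predicate TRUNC(fdate) = TRUNC(SYSDATE) can be resolved with an index range scan instead of a full table scan.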
    2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    That depends on what you mean by "better". If "better" means faster, you've already shown that one is about as good as the other.
    Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
    For clarity, I prefer:
    WHERE fdate >= TRUNC (SYSDATE)
    AND   fdate <  TRUNC (SYSDATE) + 1
    (or replace SYSDATE with a TO_DATE expression, depending on the requirements).
    3) I read somewhere: "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for execution plans 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Sorry, I can't.

  • Extracting movement type wise data (MSEG) based on posting date from 0IC_C03

    Hi Experts,
    I have implemented 0IC_C03 in the development client and am preparing a physical inventory report.
    I want to know if I can directly extract figures such as process loss, sales, sales returns, transfers in and transfers out from the cube 0IC_C03, so my question is:
    Is it possible to extract movement-type-wise data, as in MSEG, from any key figure of 0IC_C03? If it is possible, which particular KF should I use from the cube?
    Please respond at the earliest.
    Thanks,
    Romil

    Hi,
    You can extract 0RECTOTSTCK & 0ISSTOTSTCK from 0IC_C03, but you need to put the corresponding movement type for each.
    You can't use 0TOTALSTCK, as it is a non-cumulative key figure.
    In fact, all key figures except the non-cumulative ones (such as 0TOTALSTCK, 0VALSTCKQTY, 0VALSTCKVAL, ...) can be extracted.
    Best Regards
    Obaid

  • Personnel No. wise data in GL A/c (FI-HR)

    Dear All,
    My client is working on ECC 6.0 & implemented New GL.
    The requirement is to have personnel-number-wise data in a particular GL A/c (GL A/c: Balance Salary Recovery A/c). (This is a balance-sheet A/c.)
    In table V_T52EK, the account assignment type for the above symbolic A/c is FC (Posting to Balance Sheet Account (New GL)). But this is not showing the personnel-number-wise / employee-wise data.
    My questions are:
    1) Should I change the AA type to 'Q' (Posting to bal. sheet acc. with pers. no.) to get the personnel-number-wise data (especially when there is a separate AA type 'FC' for New GL balance-sheet A/cs)?
    2) If yes, what would be the effect on previous balance sheets (up to Oct '10)?
    3) Or is there any other standard customization for this?
    Thanks & Regards,
    Reshma

    Resolved.
    Thanks.

  • How to get hour wise data in report?

    Hi gurus,
    I have a requirement on the front end to retrieve data hour-wise, for example 12.30pm - 4.30pm transaction sales data, into the report, since currently I have the option to get only day-wise data. Please give me the relevant answer: where do I need to make the change?
    Thanks
    Bharath

    Hi Saravana
    I am sure you must be getting the values in the tables of the table parameters from every FM.
    Consolidate the values from the tables of all FMs into one table and build the ALV for that table only.
    I hope this way you can show the actual data in the ALV.
    thanks
    Lalit

  • In RSRT - Is it possible to check request wise data in RSRT only.

    Hi,
    In RSRT, is it possible to check request-wise data in RSRT only?
    Kindly advise me on the same.
    Thanks
    Bujji

    Saveen,
    Here is my problem.
    I have an InfoCube which contains Material No and Base Unit.
    I have a DSO which contains Material and the (alternate unit, numerator and denominator) associated with that material.
    I need to match the same material number in both the InfoCube and the DSO and load the associated alternate unit, numerator and denominator into the InfoCube.
    Since the InfoCube is non-cumulative, I am not able to build an InfoSet.
    So I added the InfoObjects for (alternate unit, numerator and denominator) to the cube.
    Now the cube has Material No and Base Unit, for both of which data is filled, and the extra Alt Unit, Numerator and Denominator, for which the data is empty.
    I need to load the alt unit, numerator and denominator from the DSO where the material no matches the InfoCube.
    I am not very good at explaining; I hope you understand. Please bear with the long text...
    Please help me.
    Thanks.
    Guru

  • Is it possible to perform network data encryption between Oracle 11g databases without the Advanced Security Option?

    Is it possible to perform network data encryption between Oracle 11g databases without the Advanced Security Option?
    We are not licensed for the Oracle Advanced Security Option, and I have been tasked to use Oracle network data encryption in order to encrypt network traffic between Oracle instances that reside on remote servers. From what I have read, and from my prior understanding, this is not possible without ASO. Can someone confirm or disprove my research? Thanks.

    Hi, Srini Chavali-Oracle
    As for http://www.oracle.com/technetwork/database/options/advanced-security/advanced-security-ds-12c-1898873.pdf?ssSourceSiteId… there, ASO is described as TDE and redacting sensitive data for display; network encryption is not mentioned.
    As for Network Encryption - Oracle FAQ (of course this is not official Oracle documentation): "Since June 2013, Net Encryption is now licensed with Oracle Enterprise Edition and doesn't require Oracle Advanced Security Option." Could you clarify this? Thanks.
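    For reference, native network encryption is configured in sqlnet.ora rather than in SQL; a minimal sketch, assuming the feature is licensed for your edition (the algorithm choice is illustrative):

    # Server-side sqlnet.ora
    SQLNET.ENCRYPTION_SERVER = required
    SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)

    # Client-side sqlnet.ora
    SQLNET.ENCRYPTION_CLIENT = requested
    SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)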

  • Year Wise Data in SQL Express

    Hi...
    Can anyone help with how to maintain year-wise data in SQL Server?
    In my project I manage year-wise data, i.e. 1st April to 31 March.
    At the end of the year, the current year's closing balances need to be carried forward as the opening balances for the next year, and the previous years' data won't be changed.
    And I don't know how to create year-wise "folders" in one database...
    Please show me some suggestions...

    Hi
    According to your description, we understand that you want to manage year-wise data in one database. You can try to create a partitioned table with a year column to store all the data. The partitioning feature is supported in SQL Server 2005 and later versions. There are four major steps for implementing partitioning, sketched in the example after this list.
    1. Create a filegroup or filegroups and corresponding files that will hold the partitions specified by the partition scheme.
    2. Create a partition function that maps the rows of a table into partitions based on the values of a specified column.
    3. Create a partition scheme that maps the partitions of a partitioned table to the new filegroups.
    4. Create or modify a table and specify the partition scheme as the storage location.
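    A hedged T-SQL sketch of these steps for an April-March fiscal year (step 1 is skipped here by mapping all partitions to PRIMARY; the table name, column names, and boundary dates are illustrative):

    -- Step 2: partition function (RANGE RIGHT puts each 1st April
    -- boundary into the new fiscal year's partition).
    CREATE PARTITION FUNCTION pf_fiscal_year (date)
    AS RANGE RIGHT FOR VALUES ('2013-04-01', '2014-04-01', '2015-04-01');

    -- Step 3: partition scheme mapping the partitions to filegroups.
    CREATE PARTITION SCHEME ps_fiscal_year
    AS PARTITION pf_fiscal_year ALL TO ([PRIMARY]);

    -- Step 4: create the table on the partition scheme; a clustered
    -- primary key must include the partitioning column.
    CREATE TABLE dbo.Transactions
    (
        TxnId   int IDENTITY NOT NULL,
        TxnDate date NOT NULL,
        Amount  decimal(18,2) NOT NULL,
        CONSTRAINT PK_Transactions PRIMARY KEY CLUSTERED (TxnDate, TxnId)
    ) ON ps_fiscal_year (TxnDate);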
    For how to create a partitioned table, please review the following links:
    Create Partitioned Tables and Indexes:http://msdn.microsoft.com/en-us/library/ms188730.aspx
    Creating a table with horizontal partitioning in SQL Server:
    http://www.mssqltips.com/sqlservertip/1796/creating-a-table-with-horizontal-partitioning-in-sql-server/
    Thanks
    Lydia Zhang

  • Best practices for data representation

    I'm curious about the best data representation for a constant or variable when there is an obvious choice of two.
    For example, take the Timeout terminal of the Event structure. This terminal takes a Long (I32) data type, but I'm wiring to it a constant value of 100 and therefore could use an Unsigned Byte (U8). Setting the constant to be I32 prevents an automatic conversion step from happening, but setting it to be U8 saves a little bit of unnecessary allocated space.
    Which is better?

    Practically speaking, it more than likely will not matter until the data sets get large; however, as "best practices" go, it is best to keep the data consistent and in the type that the control, property node, etc. expects. Directly from the NI user manual (LV 7.1):
    "Coercion dots appear on block diagram nodes to alert you that you wired two different numeric data types together. The dot means that LabVIEW converted the value passed into the node to a different representation. Coercion dots can cause a VI to use more memory and increase its run time. Try to keep data types consistent in VIs."
    Cheers,
    --Russ
