Date field performance

Hi,
The query below works well, returning a few thousand records in 10-14 seconds.
SELECT p1.A AS "Service Number", p1.B AS "Status",
C AS "Suspended Date",
D AS "Reason for suspension"
FROM inst_prod p1, reason p2
WHERE p1.b = 'SUS'
AND p1.reason_id = p2.reason_cd
But the same query takes 8-10 minutes if I add another date condition to it.
SELECT p1.A AS "Service Number", p1.B AS "Status",
C AS "Suspended Date",
D AS "Reason for suspension"
FROM inst_prod p1, reason p2
WHERE p1.b = 'SUS'
AND p1.reason_id = p2.reason_cd
AND TRUNC(p1.c) BETWEEN TO_DATE('24-Aug-2006','dd-Mon-yyyy') AND
TO_DATE('31-Aug-2006','dd-Mon-yyyy')
Is it because of the TRUNC function?
Can someone help in fine-tuning this query?
Thanks in advance
Vamsi

There is no such issue as "Date field performance". Reason: this implies a consistent and inevitable performance problem when using a date column as opposed to a numeric or varchar column. That is false.
A date column is no slower or faster than a number or varchar2 column. The speed is determined by what you, the developer or user, do with the contents of that column.
The 1st Step in "Determining The Cause Of A Performance Problem":
Look in the mirror.
The vast majority of performance issues are caused by us, because we are ignorant and do the wrong thing, misunderstand some basic concept, or worst of all, think we know better and purposefully ignore fundamentals.
Whenever I hit a problem in software engineering, I always blame myself first, before blaming the software. Something like, "you screwed up again Verreynne!". ;-)
It is easy to fall into the trap of "oh, it is not my code, it is because of how the software (Oracle) does this or that (causes performance problems with date fields)".
Don't.
> Is it because of the TRUNC function?
Yes, that could be a factor. Let's say you have an index on COLUMN_FOO. The data type is not important.
So, great, you can write a SQL like this and Oracle will be able to use the index for optimal performance:
SELECT * FROM my_table WHERE column_foo = <some-value>
However, what happens to that index when you write the following?:
SELECT * FROM my_table WHERE FUNCTION(column_foo) = <some-value>
The index on COLUMN_FOO is rendered useless. Why? Because the predicate does not compare COLUMN_FOO with some-value. The predicate states that the return value of FUNCTION() needs to be compared to some-value.
So the index cannot be used. Every single row in MY_TABLE has to be read, every single row's COLUMN_FOO has to be run via FUNCTION() and only then can the comparison to some-value be done.
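For this specific query, the usual fix is to take TRUNC() off the column side of the predicate so the optimizer can still use an index on the date column. A minimal sketch, assuming an index exists on INST_PROD.C (treating D as coming from the REASON table is my assumption):
SELECT p1.a AS "Service Number", p1.b AS "Status",
       p1.c AS "Suspended Date",
       p2.d AS "Reason for suspension"
  FROM inst_prod p1, reason p2
 WHERE p1.b = 'SUS'
   AND p1.reason_id = p2.reason_cd
   -- TRUNC(p1.c) BETWEEN :d1 AND :d2 is equivalent to this open-ended range,
   -- which compares the bare column and therefore can use an index on C:
   AND p1.c >= TO_DATE('24-Aug-2006', 'dd-Mon-yyyy')
   AND p1.c <  TO_DATE('31-Aug-2006', 'dd-Mon-yyyy') + 1
Alternatively, if the TRUNC(p1.c) form has to stay, a function-based index such as CREATE INDEX inst_prod_fbi ON inst_prod (TRUNC(c)) matches the predicate as written.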
> Can someone help in fine-tuning this query?
As others already stated, you need to look at the execution plans of the two queries and determine what the impact is when you add the TRUNC(date) predicate.
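For reference, a minimal sketch of how to capture those plans with EXPLAIN PLAN and DBMS_XPLAN (standard Oracle tooling; table and column names as in the original post):
EXPLAIN PLAN FOR
SELECT p1.a, p1.b, p1.c, p2.d
  FROM inst_prod p1, reason p2
 WHERE p1.b = 'SUS'
   AND p1.reason_id = p2.reason_cd
   AND TRUNC(p1.c) BETWEEN TO_DATE('24-Aug-2006', 'dd-Mon-yyyy')
                       AND TO_DATE('31-Aug-2006', 'dd-Mon-yyyy');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Run it once with and once without the TRUNC predicate; if the plan changes from an index access on INST_PROD to a full table scan, the function is the culprit.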

Similar Messages

  • Data fields not filled in DSO Activation

    Dear All,
    I have written an End routine to extract data into a DSO and fill some data fields. The End routine performs a lookup on another DSO and modifies the records of the result_package. The data fields are characteristics, and hence in the transformation they are set to No Transformation.
    When I extracted the data into the DSO, the new table had all the data fields correctly filled. But after activating the request, all the data fields are erased. However, the key fields data is still available in the Active table.
    Can you please help me on this.
    Thanks,
    Srinivas

    Hi Srinivas
    I have faced the same issue: data was coming through to the new table, but after activation the fields were getting blank.
    There is no necessity for doing a dummy mapping. In the transformation screen you will have an icon near the delete-routine symbol, with an arrow, similar to "select all" and "unselect all". Just click on it and then set it so that all the target fields are populated.
    This will solve the issue. If it does not, you can make a direct mapping to any of the fields in the source; it need not be the exact field in the target.
    Thanks,
    Saveen Kumar

  • Need to Programmatically Set IRR Filter on Date Field Due to APEX 4.1 Bug

    There may be another workaround, but here is the problem that we are encountering...
    We have a huge table that is partitioned on a DATE field and an IRR that reports on this table. By default, we want to show the most recent 3 days of data. There is a built-in filter for "is in the last 3 Days." Sounds Great! Unfortunately APEX generates the code using TIMESTAMP rather than DATE functions. As a result of this, the query does not perform partition pruning and, as a consequence, it is doing a full table scan which takes forever. Note the use of the "LOCALTIMESTAMP" function in the query that is generated by APEX for this filter:
    SELECT   "BUSINESS_DATE",
             COUNT ( * ) OVER () AS apxws_row_cnt
      FROM   (SELECT   *
                FROM   (SELECT   *
                          FROM   position_delta_his p) r
               WHERE   ("BUSINESS_DATE" BETWEEN *LOCALTIMESTAMP*
                                                - NUMTOYMINTERVAL (:APXWS_EXPR_1,
                                                                   'year')
                                            AND  *LOCALTIMESTAMP*)) r
    WHERE   ROWNUM <= TO_NUMBER (:APXWS_MAX_ROW_CNT)If, instead, APEX used the SYSDATE function, as the underlying column is a DATE, this returns instantly, after partition pruning.
    SELECT   "BUSINESS_DATE",
             COUNT ( * ) OVER () AS apxws_row_cnt
      FROM   (SELECT   *
                FROM   (SELECT   *
                          FROM   position_delta_his p) r
               WHERE   ("BUSINESS_DATE" BETWEEN *SYSDATE*
                                                - NUMTOYMINTERVAL (:APXWS_EXPR_1,
                                                                   'year')
                                            AND  *SYSDATE*)) r
    WHERE   ROWNUM <= TO_NUMBER (:APXWS_MAX_ROW_CNT)
    The bug is that APEX should base the underlying function on the data type of the filtered column.
    As a workaround, if we create a filter where BUSINESS_DATE >= '4/13/2012' (three business days ago), again, this returns instantaneously. The issue is that we can only set this filter by using the APEX GUI. We need to be able to:
    1. Determine the date for 3 business days ago
    2. Set this as the default filter.
    I tried creating a BEFORE HEADER PL/SQL page process, but it does not appear to have any effect. Here is that code:
    DECLARE
        ldt_Filter DATE;
        CURSOR lcsr_GetMaxBusinessDate IS
            SELECT Max(BUSINESS_DATE)
            FROM POSITION_DELTA_HIS;
        DAYS_AGO CONSTANT NUMBER := 3;       
    BEGIN
        APEX_UTIL.IR_CLEAR( :APP_PAGE_ID );
        OPEN lcsr_GetMaxBusinessDate;
        FETCH lcsr_GetMaxBusinessDate INTO ldt_Filter;
        CLOSE lcsr_GetMaxBusinessDate;
        ldt_Filter := ( Trunc( ldt_Filter ) - DAYS_AGO );
        APEX_UTIL.IR_FILTER( p_page_id       => :APP_PAGE_ID,
                             p_report_column => 'BUSINESS_DATE',
                             p_operator_abbr =>'GTE',
                             p_filter_value  => TO_CHAR( ldt_Filter, 'YYYYMMDDHH24MISS' ) );
    END;
    Can anyone tell me:
    1. How to set this filter programmatically (also needs to be displayed on the page so the user can see the current filter...as if it were created via the GUI) ***OR***
    2. Some other workaround for this issue.
    Thanks,
    -Joe

    Actually, now that I look further, I don't think it is going to work to simply set the filter programmatically. The end user can still click the column heading where they are only given the choice of the LOCALTIMESTAMP based filters. If they pick one, the page is going to be out to lunch for them.
    We really need some other fix. We really need a way to actually address the underlying issue.
    -Joe

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Date Field in Hierarchical ALV

    Hello Friends,
    I have a few fields in my Hierarchical ALV output which are initially hidden.
    When I choose to display these fields by changing the Layout the date fields come out in the form
    MMDDYYYY
    I need those fields as 
    MM/DD/YYYY
    The date fields which are NOT hidden initially  are in the format MM/DD/YYYY. No problem there.
    When I go to settings and choose column width optimize I am then able to see these date fields as MM/DD/YYYY.
    In my code I already set this column width to optimize. It is also working. But for fields which are hidden initially, I have to explicitly set the width optimization again.
    This problem is not there in GRID ALV.
    Why is the HIER ALV not putting the date fields in MM/DD/YYYY format?

    Hi,
    Just pass the Edit option of the fieldcatalog for those specific fields...
    fcat-edit = 'X'.
    CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
        EXPORTING
          i_callback_program       = sy-cprog
          i_callback_pf_status_set = 'PF_STATUS_SET'
          i_callback_user_command  = 'USER_COMMAND'    "<----  pass this
          i_callback_top_of_page   = 'TOP'
          is_layout                = it_layout
          it_fieldcat              = it_fcat
          i_default                = 'X'
          i_save                   = 'A'
          it_events                = it_event
        TABLES
          t_outtab                 = it_final
        EXCEPTIONS
          program_error            = 1
          OTHERS                   = 2.
    *&      Form  USER_COMMAND
    *       text
    *      -->R_UCOMM      text
    *      -->RS_SELFIELD  text
    FORM user_command USING r_ucomm LIKE sy-ucomm
                            rs_selfield TYPE slis_selfield.
      CASE r_ucomm.
        WHEN '&DATA_SAVE'.                "<-------check this
          PERFORM save_data.
      ENDCASE.
    ENDFORM.                    "USER_COMMAND

  • Date fields issue in sales orders through IDOC

    Hi all,
    I'm trying to create a sales order through Inbound IDOC using IDOC_INPUT_ORDERS.
    I have 2 custom fields in the sales order which are date fields.
    I'm populating them from the IDOC segments (using E1EDKT1-tdid, E1EDKT2-tdline) in the user exit just before the CALL TRANSACTION statement. These dates are not coming through for a few orders, but are getting populated normally for others.
    But the tricky part is that if I process the same IDOC a second time, the order created has the dates.
    My issue is that I need all the orders to be populated with the dates from the IDOC.
    Could anyone please help me with this issue.
    All the answers/replies are highly appreciated!!
    Thanks for your help!

    hi,
    the following may be of help
    There are 2 major exceptions to the batch input internal table. All text data from the segments E1EDKT1/2 and E1EDPT1/2 are not part of the batch input session. Text elements from these segments are posted directly to the database AFTER the sales order has been created (please note: in order that item texts are posted correctly to the sales order, the field POSEX from segment E1EDP01 MUST be transmitted). Also, configurable material information from the IDOC is exported to a global memory and therefore will not be part of the batch input session. Configurable material data is processed in Form CHECK_CONFIGURATION and the data is exported to global memory in the following code:
    * Exportiert IDOC Daten ins globale Memory
    * export IDOC data to global memory
      call function 'CUXC_IDOC_TO_MEMORY'          
           tables                                  
                cfg_instref = xe1curef             
                cfg_ref     = xe1cucfg             
                inst        = xe1cuins             
                part_of     = xe1cuprt             
                value       = xe1cuval             
           exceptions                              
                error       = 1                    
                others      = 2.                   
    One extra piece of useful information, if you wish to debug the process, is to set your breakpoints at the following points.
    for Function module IDOC_INPUT_ORDERS
    include LVEDAF0F..
      CALL TRANSACTION 'VA01' USING bdcdata
                                MODE  input_method
                                UPDATE 'S'
                                MESSAGES INTO xbdcmsgcoll.
    Now input_method is A for foreground processing and N for background processing, but try changing it to P (pass-through mode).
    If you have breakpoints in VA01, e.g. MV45AF0B_BELEG_SICHERN (the part where the sales document is saved), they will be hit, e.g.
    perform userexit_save_document_prepare.
    perform userexit_save_document.
    Lastly, don't forget about program SDJEDI; see note 380603 for more information on this.
    Hope this is useful
    Paul Quinn

  • Action date field in follow-up action

    If you specify a follow-up action in a contract, you must enter a date or a date rule in the Action date field.
    Question: where is the Action date field? I searched for it in the sales document contract and did not find it in the contract data.
    Please help me.

    HI
    I don't think there is such an option while doing UD. Actually, when you perform UD, the system sets the completion date automatically from the program.
    -ashok

  • Date field in Invoice

    Friends
    For VAT purposes we need to input a date of performance (date when the services ended) in the Invoices. This date does not flow from a sales order but needs to be entered at the line item level in the invoice.
    We have the Billing Date (at header level), the Services Rendered date (on each line item of the billing document) and the Created On date (header), but these cannot be used, as some of them are linked to pricing and others are at header level and flow from the SD order.
    Is there a stand-alone, user-defined date field that can be implemented?

    Hi,
    You can use the field that is suggested by Ankur.
    One more option is to go for text field.
    Create a new "Z" text for your Sales document and billing document.
    You can manually enter values for this field.
    You can use this field for your requirement.
    Regards,
    Krishna.

  • Office 2010 Don't print blank lines when data fields are empty

    How do you do this in Office 2010? Where do we go now?
    This used to be under the Tools menu in previous versions:
    1. On the Tools menu, click Mail Merge.
    2. In the Mail Merge Helper dialog box, click Merge.
    NOTE: You must select the main document and data source for the merge before doing step 2.
    3. In the Merge dialog box, under When Merging Records, click either:
    Don't print blank lines when data fields are empty.
    -or-
    Print blank lines when data fields are empty.
    4. Click Close in the Merge dialog box, and then click Close in the Mail Merge Helper.
    NOTE: If you made no changes in the Merge dialog box, the Close button is not available. Click Cancel instead.

    Hi,
    In Word 2007 or Word 2010, you can add the Mail Merge Helper to the Quick Access Toolbar and then perform the steps above.
    To add the Mail Merge Helper command to the Quick Access Toolbar, follow these steps:
    a. In Word 2007, click the Microsoft Office Button, and then click Word Options > Customize.
    In Word 2010, click File > Options > Customize Ribbon.
    b. Click All Commands in the Choose commands from list, and then click Mail Merge Helper.
    c. Click Add, and then OK.
    Best Regards,
    Sally Tang

  • Database DATE field

    I have a process on a page that loads a form's field data into a %rowtype variable and passes it to a package in the database.
    The package then performs an update to the table holding the data.
    There is a DATE field that I can not validate on the APEX front-end.
    So, if the user enters something in the date field that is invalid,
    it gets passed to the package on the back-end that's doing the update and it throws a database error back to APEX.
    Question: How can I intercept this during the update?
    Sure, I can update the other fields and send a flag back to APEX indicating that something is wrong with the date field and APEX can then redirect the user to a "You entered an invalid date field". But then I'm just re-creating the APEX validation that I can't use.
    This is where the customer has put me. 8-)
    Just wondered if I'm missing something. I guess I could do some formatting validation in the APEX process, but then I'm still just mimicking the APEX-provided validations.
    Thanks.
    Edited by: semaphore on Sep 15, 2008 8:33 AM

    Why can't you validate the date field in the APEX front end?
    You can certainly determine if the field is a valid date or if it falls within a range.
    It's just custom item validation that needs to be done.
    I have apex pages with start and end dates and I check that end date is greater or equal to start date, etc.
    If it's got to be a range of dates, you can pick a date from a lookup.
    You could also populate it from a calendar.
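    As a sketch of the kind of custom validation meant here: an APEX validation of type "PL/SQL Function Returning Boolean" on the item (:P1_MY_DATE and the format mask are hypothetical names, not from the thread):
    DECLARE
      l_date DATE;
    BEGIN
      -- TO_DATE raises an exception when the item text is not a valid date
      l_date := TO_DATE(:P1_MY_DATE, 'DD-MON-YYYY');
      -- optional range rule: the date may not lie in the future
      RETURN l_date <= TRUNC(SYSDATE);
    EXCEPTION
      WHEN OTHERS THEN
        RETURN FALSE;
    END;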

  • Count Function on Date Field in Discoverer

    Hey folks,
    How would someone like to educate a novice on how to perform a count function on a date field? I am writing a Discoverer report and have a requirement to display counts of the number of records that have a modified date of > 30 days and > 60 days. I am trying to use the 2nd count function as such: COUNT(modified_date < (SYSDATE - 30)). I keep getting the "error in function" message. I have tried it with CURRENT_DATE as well and get the same error.
    Any help would be greatly appreciated.
    Thanks much,
    Schuyler

    Hi
    You can't use a logical expression in a function like this:
    COUNT(modified_date < (SYSDATE - 30))
    Instead of this, use a CASE expression. Note that COUNT counts non-NULL values, so the CASE must return NULL (the implicit ELSE) for rows that should not be counted:
    COUNT(CASE WHEN modified_date < (SYSDATE - 30) THEN 1 END)
    Equivalently: SUM(CASE WHEN modified_date < (SYSDATE - 30) THEN 1 ELSE 0 END).
    Ott Karesz
    http://www.trendo-kft.hu

  • Using DECODE() to insert to DATE field

    I'm trying to use the DECODE function to test for NULL before inserting to a DATE field. However, it seems to only insert the DATE, with a "default" time of 12:00 - it isn't properly inserting the time.
    Basically I need to test if Date1 is NULL. If it isn't I need to concatenate the DATE from Date1 with the TIME from Date2 to get a full date/time... then insert this new value.
    Generic Example:
    CREATE TABLE DATETEST (TestID NUMBER(1), TestDate DATE);
    DECLARE
    v_Date1 DATE;
    v_Date2 DATE;
    BEGIN
    v_Date1 := TO_DATE('01-JAN-11 05:53:12', 'DD-MON-YY HH:MI:SS');
    v_Date2 := TO_DATE('08-FEB-11 02:18:31', 'DD-MON-YY HH:MI:SS');
    INSERT INTO DATETEST (TestID, TestDate) VALUES ('1', DECODE(v_Date1, NULL, NULL, TO_DATE(To_Char(v_Date1, 'DD-MON-YY') || ' ' || TO_CHAR(v_Date2, 'HH:MI:SS'),'DD-MON-YY HH:MI:SS')));
    INSERT INTO DATETEST (TestID, TestDate) VALUES ('2', TO_DATE(To_Char(v_Date1, 'DD-MON-YY') || ' ' || TO_CHAR(v_Date2, 'HH:MI:SS'),'DD-MON-YY HH:MI:SS'));
    END;
    SELECT TestID, TO_CHAR(TestDate, 'DD-MON-YY HH:MI:SS') from DATETEST;
    This example performs two inserts. One with the DECODE function, and one without. The one without inserts the time properly. Can anyone tell me why the one with the DECODE function doesn't? I realize I can use a simple if/then to check if the date is null above and put the date/time in a variable, but since my real scenario is in a large chunk of other stuff, I'm trying to keep it as streamlined as possible.
    Edited by: BoredBillJ on Jul 14, 2011 6:39 AM

    The problem you are having is due to the nature of how DECODE and CASE determine what data type to return, and your NLS_DATE_FORMAT settings. Both use the data type of the first returnable argument to determine the type of all of them. So, in your DECODE statement, the first returnable value is NULL which, in the absence of a cast (either implicit or explicit), is treated as VARCHAR2. So, if the date is not null, the implicit conversion to a varchar to match the return type, then back to a date to insert into the table, is losing the time. You need something more like:
       INSERT INTO test_date (Test_ID, TestDate)
       VALUES ('1', DECODE(v_Date1, NULL, TO_DATE(NULL),
                                          TO_DATE(To_Char(v_Date1, 'DD-MON-YY') || ' ' ||
                                       TO_CHAR(v_Date2, 'HH:MI:SS'),'DD-MON-YY HH:MI:SS')));
    Even if you use Solomon's method of generating the date, if you need the DECODE/CASE, then you will have to either use TO_DATE(NULL) or use CASE instead of DECODE and reverse the test so that the first returnable value is a DATE, like:
    SQL> DECLARE
      2     v_Date1 DATE;
      3     v_Date2 DATE;
      4  BEGIN
      5     v_Date1 := TO_DATE('01-JAN-11 05:53:12', 'DD-MON-YY HH:MI:SS');
      6     v_Date2 := TO_DATE('08-FEB-11 02:18:31', 'DD-MON-YY HH:MI:SS');
      7     INSERT INTO test_date (Test_ID, TestDate)
      8     VALUES ('1', CASE WHEN v_date1 IS NOT NULL
      9                       THEN TO_DATE(To_Char(v_Date1, 'DD-MON-YY') || ' ' ||
    10                                    TO_CHAR(v_Date2, 'HH:MI:SS'),'DD-MON-YY HH:MI:SS')
    11                       ELSE NULL END);
    12     INSERT INTO test_date (Test_ID, TestDate)
    13     VALUES ('2', TO_DATE(To_Char(v_Date1, 'DD-MON-YY') || ' ' ||
    14                  TO_CHAR(v_Date2, 'HH:MI:SS'),'DD-MON-YY HH:MI:SS'));
    15  END;
    16  /
    PL/SQL procedure successfully completed.
    SQL> select test_id, to_char(testdate, 'dd-mon-yyyy hh24:mi:ss')
      2  from test_date;
       TEST_ID TO_CHAR(TESTDATE,'DD
             1 01-jan-2011 02:18:31
             2 01-jan-2011 02:18:31
    John

  • F4 Help on Date Field after ALV is generated

    Hi all,
    Now this might sound like a very simple question, but I'm having some difficulty with it.
    My idea is to have F4 help on a date field after the ALV Grid report is executed. From the forum search, I've got such answers as:
    l_fieldcat-edit         = 'X'.
    l_fieldcat-F4AVAILABL = 'X'.
    Now the problem with this is that I'm getting an error saying:
    The data object "l_fieldcat" does not have a component called "F4AVAILABL".
    And then I tried this:
    l_fieldcat-fieldname    = 'DDATE'.
    l_fieldcat-ref_tabname = 'ZCST001'.
    Now at the report output when I press F4 on the date field it gives me a dump saying:
    Field symbol has not yet been assigned.
    Error in the ABAP Application Program
    The current ABAP program "CL_GUI_ALV_GRID===============CP" had to be terminated because it has come across a statement that unfortunately cannot be executed.
    So I'm kinda stuck now. Appreciate your suggestions.
    PS: I'm on ECC6.
    PPS: The intended F4 field is a custom field in a custom table.
    pk

    Run This:
    report ztest.
    TYPE-POOLS : slis.
    tables:mara,makt.
    data: begin of it_final occurs 0,
    matnr like mara-matnr,
    ERSDA like mara-ERSDA,
    end of it_final.
    DATA: it_fieldcat     TYPE slis_t_fieldcat_alv,
          wa_fieldcat     TYPE slis_fieldcat_alv,
          wa_layout       TYPE slis_layout_alv,
          g_repid         TYPE sy-repid.
    START-OF-SELECTION.
    select matnr ERSDA into table it_final from mara up to 10 rows.
      g_repid = sy-repid.
      PERFORM f_populate_fieldcat.
    *  PERFORM fill_zpayroll.
      PERFORM f_call_alv.
    *&      Form  f_populate_fieldcat
    *       text
    *  -->  p1        text
    *  <--  p2        text
    form f_populate_fieldcat .
    "Mat Number
      CLEAR wa_fieldcat.
      wa_fieldcat-tabname     = 'IT_FINAL'.
      wa_fieldcat-fieldname   = 'MATNR'.
      wa_fieldcat-seltext_l   = 'MAT No.'.
      wa_fieldcat-outputlen   = 15.
      APPEND wa_fieldcat TO it_fieldcat.
      "Creation date
      CLEAR wa_fieldcat.
      wa_fieldcat-tabname     = 'IT_FINAL'.
      wa_fieldcat-fieldname   = 'ERSDA'.
      wa_fieldcat-edit         = 'X'.
    wa_fieldcat-REF_TABname = 'MARA'.
    wa_fieldcat-REF_FIELDname = 'ERSDA'.
      wa_fieldcat-seltext_l   = 'Creation date '.
      wa_fieldcat-outputlen   = 15.
      APPEND wa_fieldcat TO it_fieldcat.
    endform.                    " f_populate_fieldcat
    *&      Form  f_call_alv
    *       text
    *  -->  p1        text
    *  <--  p2        text
    form f_call_alv .
    CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
        EXPORTING
          i_callback_program     = g_repid
          i_callback_top_of_page = 'TOP-OF-PAGE'
          is_layout              = wa_layout
          it_fieldcat            = it_fieldcat[]
          I_SAVE = 'A'
        TABLES
          t_outtab               = it_final
        EXCEPTIONS
          program_error          = 1
          OTHERS                 = 2.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
    endform.                    " f_call_alv
    In the custom table, the date field must be referenced to type DATS.
    P.S. I hope you have already solved it.
    Cheers

  • Large memo meta-data field not fully indexed

    Hi all,
    Another day, another UCM question. Hopefully the answers given to my questions will help people facing the same issues.
    We created a custom meta-data field of type memo. Looking into the database, the following columns were created:
    colmeta.xcustom1 varchar2(2000)
    docmeta.xcustom1 varchar2(2000)
    idctext1.xcustom1 varchar2(2000)
    idctext2.xcustom1 varchar2(2000)
    indexerstate.xcustom1 varchar2(2000)
    We want specific meta-data field to hold 5000 characters and to have its content retrievable by the full text search feature. We've read about the MinMemoFieldSize and MemoFieldSize configuration variables, but reasoned that they shouldn't be used since the size requirement is only for this specific field.
    So we've updated the database schema manually (can't seem to find another way to do this) by sending these instructions:
    alter table colmeta modify xcustom1 varchar2(5000)
    alter table docmeta modify xcustom1 varchar2(5000)
    Then restarted the UCM server and performed a full collection index rebuild.
    These updates work in that new content can be created with a meta-data value of up to 5000 characters (it would give an error before). However, it seems that the whole field value is not being indexed: doing a full-text search on a word in the first 2000 field characters finds the content, but doing a full-text search on a word in the last 3000 field characters doesn't.
    We've tried manually extending the size of the idctext1, idctext2 and indexerstate columns, but it doesn't seem to have any impact.
    Any ideas?
    Thanks again.
    Edited by: Spiff on 1-Feb-2012 12:32 PM

    Hi,
    Please check the following post's last update.
    Metadata fields sizes
    Thanks
    Srinath

  • DATE Field in composite key

    Hi experts.
    I have a small query.
    Is there any performance impact of having a date field, along with four other fields, as part of a composite key?

    When you add an index, there is definitely some overhead to keep the index synchronized with newly added/updated data (so whether you have indexed NUMBERs, CHARs or DATEs does not matter - but I could be wrong in saying this).
    Concerning your SELECTs, if your existing queries were using this index then they may or may not use the new index (due to cardinality and other factors); so you need to test this first.
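    For illustration only (hypothetical table and column names): a DATE is stored as a fixed 7-byte value, so declaring it inside a five-column composite key looks, and largely costs, the same as any other scalar column:
    CREATE TABLE order_item (
      order_no   NUMBER,
      item_no    NUMBER,
      plant      VARCHAR2(4),
      batch      VARCHAR2(10),
      valid_from DATE,
      -- the DATE column participates in the composite primary key like any other
      CONSTRAINT order_item_pk
        PRIMARY KEY (order_no, item_no, plant, batch, valid_from)
    );
    As the reply says, check the execution plans of your existing SELECTs to see whether they actually use the index that backs this key.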
