Assigning a time to a date item always results in 00:00 (Forms 6i)

Hi,
in Forms 6i I have a date item (called time) with the format mask hh24:mi.
If I enter a time in that field at runtime (e.g. 08:00), everything is fine.
But if I assign the time to that item programmatically with
:block.time := to_date('08:00','HH24:MI');
or with
:block.time := to_date('01.01.2002 08:00','DD.MM.YYYY HH24:MI');
the displayed time is always "00:00".
Any hints on this?
Thanks in advance,
Marco.

Hello,
Are you sure the item's data type is DATETIME?
Francois
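For what it's worth, here is a minimal sketch of the behaviour Francois is hinting at (illustrative only, reusing the item name from the question):

-- With the item's Data Type set to Datetime and the format mask HH24:MI,
-- the original assignment keeps its time portion:
:block.time := to_date('01.01.2002 08:00','DD.MM.YYYY HH24:MI');
-- With Data Type = Date, Forms stores only the date part, so a
-- programmatically assigned time is lost and the mask displays "00:00".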

Similar Messages

  • Date field is always disabled in qlist form

    Hi,
    We are using qListForm (a Quest web part) and have observed that when I include a date/time field (new form), the date field's calendar popup is greyed out; I can change the time but not the date. Is there any resolution for this?
    Thanks,
    Poonam

    Can you try deactivating the web part in a test site collection, or one that doesn't have any work you want to make live, and see if this reverts to the normal behaviour? That way you'll be able to see whether the error can be attributed to that product.
    I'd also suggest downloading SharePoint Manager 2010 and seeing what properties that field has been set to.
    Quest also maintains a community forum which you might want to consider posting in.
    http://communities.quest.com/community/sharepointforall/customization?view=discussions#/?filter=answered
    Steven Andrews
    SharePoint Business Analyst: LiveNation Entertainment
    Blog: baron72.wordpress.com
    Twitter: Follow @backpackerd00d
    My Wiki Articles:
    CodePlex Corner Series
    Please remember to mark your question as "answered" if this solves (or helps) your problem.

  • Using FIFO in SQL code for assigning an indicator to data items

    Hi All,
    We are looking for a solution based on a FIFO algorithm. We have a table containing the following data:
    We need to perform FIFO on this table and assign an "object" reference as a data item to other rows, based on the following conditions:
    1. First we group the rows by the "object" column.
    2. Then we traverse each group starting from the row having the minimum start time of the object group, e.g. row id 1 for object group "19O".
    2.1 Assign an "EqpRef" as "object" + <an integer value>, where the integer value changes when a start/end chain finishes. The start/end chain is explained in step 2.2.
    2.2 Then we pick the "nextstarttime" of the picked row and compare it against the closest "starttime" of the rows whose "start" is the same as the "end" of the row we picked the "nextstarttime" from, e.g. row id 2 of object 19O has "nextstarttime" 0310 closest to "starttime" 0355 of row id 2 of object 19O, and row id 2 has "start" AAL which is the same as the "end" of row id 1.
    2.3 We have to follow this chain until it ends and allocate the same "EqpRef" to each row in a chain.
    Hence the output we need to generate will be as follows:
    Kindly help with this.
    Thanks in advance
    -Regards
    Kumud

    Hi,
    Please find the following code block showing the input data and what the output should be.
    --The input data
    create table temp_table
    (
    row_id int,
    engine_no varchar(20),
    schedule_no varchar(20),
    start_station varchar(20),
    end_station varchar(20),
    startdate datetime,
    enddate datetime,
    starttime datetime,
    endtime datetime,
    record_id int,
    engine_id int,
    Mgt int,
    nextstarttime datetime,
    Schedule_ref varchar(20),
    Engine_Ref varchar(20)
    )
    GO
    insert into temp_table values(1,'19O','101','SGC','IXP','2015/01/01','2015/01/01','00:00:00','01:00:00',1,10,60,'02:00:00',null,null)
    insert into temp_table values(2,'19O','102','SGC','IXP','2015/01/01','2015/01/01','00:30:00','01:30:00',2,10,60,'02:30:00',null,null)
    insert into temp_table values(3,'19O','103','SGC','IXP','2015/01/01','2015/01/01','02:30:00','03:30:00',3,10,60,'04:30:00',null,null)
    insert into temp_table values(4,'19O','104','IXP','DFW','2015/01/01','2015/01/01','03:30:00','04:00:00',4,10,60,'05:00:00',null,null)
    insert into temp_table values(5,'19O','105','IXP','DFW','2015/01/01','2015/01/01','04:30:00','05:30:00',5,10,60,'06:30:00',null,null)
    insert into temp_table values(6,'19O','106','DFW','ABC','2015/01/01','2015/01/01','05:00:00','06:00:00',6,10,60,'07:00:00',null,null)
    insert into temp_table values(7,'19O','107','DFW','ABC','2015/01/01','2015/01/01','06:00:00','07:00:00',7,10,60,'08:00:00',null,null)
    insert into temp_table values(8,'19O','108','DFW','ABC','2015/01/01','2015/01/01','07:00:00','08:00:00',8,10,60,'09:00:00',null,null)
    insert into temp_table values(9,'19O','109','ABC','DEF','2015/01/01','2015/01/01','10:00:00','11:30:00',9,10,60,'12:30:00',null,null)
    insert into temp_table values(10,'19O','110','XYZ','BDW','2015/01/01','2015/01/01','13:00:00','15:00:00',10,10,60,'16:00:00',null,null)
    insert into temp_table values(1,'319','111','PQR','STU','2015/01/01','2015/01/01','00:00:00','01:00:00',11,11,60,'02:00:00',null,null)
    insert into temp_table values(2,'319','211','PQR','STU','2015/01/01','2015/01/01','04:30:00','15:30:00',12,11,60,'16:30:00',null,null)
    insert into temp_table values(3,'319','112','STU','ABC','2015/01/01','2015/01/01','05:00:00','06:00:00',13,11,60,'07:00:00',null,null)
    insert into temp_table values(4,'319','212','STU','DEF','2015/01/01','2015/01/01','06:00:00','07:00:00',14,11,60,'08:00:00',null,null)
    insert into temp_table values(5,'319','213','STU','PQR','2015/01/01','2015/01/01','07:00:00','08:00:00',15,11,60,'09:00:00',null,null)
    insert into temp_table values(6,'319','118','STU','XYZ','2015/01/01','2015/01/01','10:00:00','11:30:00',16,11,60,'12:30:00',null,null)
    insert into temp_table values(7,'319','119','DEF','JKL','2015/01/01','2015/01/01','13:00:00','15:00:00',17,11,60,'16:00:00',null,null)
    insert into temp_table values(8,'319','215','ABC','MNO','2015/01/01','2015/01/01','17:00:00','20:00:00',18,11,60,'21:00:00',null,null)
    --output should come as the data in temp_table_final
    create table temp_table_final
    (
    row_id int,
    engine_no varchar(20),
    schedule_no varchar(20),
    start_station varchar(20),
    end_station varchar(20),
    startdate datetime,
    enddate datetime,
    starttime datetime,
    endtime datetime,
    record_id int,
    engine_id int,
    Mgt int,
    nextstarttime datetime,
    Schedule_ref varchar(20),
    Engine_Ref varchar(20)
    )
    GO
    insert into temp_table_final values(1,'19O','101','SGC','IXP','2015/01/01','2015/01/01','00:00:00','01:00:00',1,10,60,'02:00:00','101','19O-1')
    insert into temp_table_final values(2,'19O','102','SGC','IXP','2015/01/01','2015/01/01','00:30:00','01:30:00',2,10,60,'02:30:00','102','19O-2')
    insert into temp_table_final values(3,'19O','103','SGC','IXP','2015/01/01','2015/01/01','02:30:00','03:30:00',3,10,60,'04:30:00','103','19O-3')
    insert into temp_table_final values(4,'19O','104','IXP','DFW','2015/01/01','2015/01/01','03:30:00','04:00:00',4,10,60,'05:00:00','101','19O-1')
    insert into temp_table_final values(5,'19O','105','IXP','DFW','2015/01/01','2015/01/01','04:30:00','05:30:00',5,10,60,'06:30:00','102','19O-2')
    insert into temp_table_final values(6,'19O','106','DFW','ABC','2015/01/01','2015/01/01','05:00:00','06:00:00',6,10,60,'07:00:00','104','19O-1')
    insert into temp_table_final values(7,'19O','107','DFW','ABC','2015/01/01','2015/01/01','06:00:00','07:00:00',7,10,60,'08:00:00','107','19O-4')
    insert into temp_table_final values(8,'19O','108','DFW','ABC','2015/01/01','2015/01/01','07:00:00','08:00:00',8,10,60,'09:00:00','105','19O-2')
    insert into temp_table_final values(9,'19O','109','ABC','DEF','2015/01/01','2015/01/01','10:00:00','11:30:00',9,10,60,'12:30:00','106','19O-1')
    insert into temp_table_final values(10,'19O','110','XYZ','BDW','2015/01/01','2015/01/01','13:00:00','15:00:00',10,10,60,'16:00:00','110','19O-5')
    insert into temp_table_final values(1,'319','111','PQR','STU','2015/01/01','2015/01/01','00:00:00','01:00:00',11,11,60,'02:00:00','111','319-1')
    insert into temp_table_final values(2,'319','211','PQR','STU','2015/01/01','2015/01/01','04:30:00','15:30:00',12,11,60,'16:30:00','211','319-2')
    insert into temp_table_final values(3,'319','112','STU','ABC','2015/01/01','2015/01/01','05:00:00','06:00:00',13,11,60,'07:00:00','111','319-1')
    insert into temp_table_final values(4,'319','212','STU','DEF','2015/01/01','2015/01/01','06:00:00','07:00:00',14,11,60,'08:00:00','212','319-3')
    insert into temp_table_final values(5,'319','213','STU','PQR','2015/01/01','2015/01/01','07:00:00','08:00:00',15,11,60,'09:00:00','213','319-4')
    insert into temp_table_final values(6,'319','118','STU','XYZ','2015/01/01','2015/01/01','10:00:00','11:30:00',16,11,60,'12:30:00','118','319-5')
    insert into temp_table_final values(7,'319','119','DEF','JKL','2015/01/01','2015/01/01','13:00:00','15:00:00',17,11,60,'16:00:00','212','319-3')
    insert into temp_table_final values(8,'319','215','ABC','MNO','2015/01/01','2015/01/01','17:00:00','20:00:00',18,11,60,'21:00:00','112','319-1')
    What we are doing here is generating a schedule for train departures.
    Here, we have to identify the train schedules by making a chain of stations, considering that the end station of a record for a given engine no should be the start station of another record for the same engine no; also, the start time of that record should be the one nearest to the nextstarttime at the same station.
    For example: if we pick the first row, "SGC-IXP", its nextstarttime is "02:00:00 am". This means the train departed from SGC will reach IXP and is available for departure from IXP after "02:00:00 am". So we have to consider the record having start station IXP and the start time nearest to that nextstarttime ("02:00:00"). So the next train departure would be IXP-DFW, having start time "03:30:00 am".
    Here you can see we have to assign the schedule no of the previously considered record to the chained schedule, so we have updated "schedule_ref" to 101. We also have to assign "engine no - <integer counter>" to each single chain of schedules, given in engine_ref.
    Regards
    Kumud
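    No full solution was posted in this thread, but the chaining rule described above can be sketched roughly as follows in T-SQL (untested; it assumes record_id uniquely identifies a row and that all times fall on the same day, so the time-only datetime values compare correctly):
    SET NOCOUNT ON;
    DECLARE @chains TABLE (
        engine_no     varchar(20),
        chain_no      int,
        last_station  varchar(20),   -- end_station of the last row in the chain
        ready_time    datetime,      -- nextstarttime of the last row in the chain
        last_schedule varchar(20)    -- schedule_no of the last row in the chain
    );
    DECLARE @rec int, @engine varchar(20), @sched varchar(20),
            @from varchar(20), @to varchar(20),
            @start datetime, @next datetime,
            @chain int, @ref varchar(20);
    DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT record_id, engine_no, schedule_no, start_station, end_station,
               starttime, nextstarttime
        FROM   temp_table
        ORDER  BY engine_no, starttime, record_id;
    OPEN cur;
    FETCH NEXT FROM cur INTO @rec, @engine, @sched, @from, @to, @start, @next;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @chain = NULL;
        -- FIFO: attach the row to the chain that has been waiting at this
        -- station the longest (earliest ready_time not after the start time)
        SELECT TOP (1) @chain = chain_no, @ref = last_schedule
        FROM   @chains
        WHERE  engine_no = @engine
          AND  last_station = @from
          AND  ready_time <= @start
        ORDER  BY ready_time;
        IF @chain IS NULL
        BEGIN
            -- no chain is available at this station yet: open a new one
            SELECT @chain = COALESCE(MAX(chain_no), 0) + 1
            FROM   @chains
            WHERE  engine_no = @engine;
            SET @ref = @sched;   -- first row of a chain references its own schedule
            INSERT INTO @chains VALUES (@engine, @chain, @to, @next, @sched);
        END
        ELSE
            UPDATE @chains
            SET    last_station = @to, ready_time = @next, last_schedule = @sched
            WHERE  engine_no = @engine AND chain_no = @chain;
        UPDATE temp_table
        SET    Schedule_ref = @ref,
               Engine_Ref   = @engine + '-' + CAST(@chain AS varchar(10))
        WHERE  record_id = @rec;
        FETCH NEXT FROM cur INTO @rec, @engine, @sched, @from, @to, @start, @next;
    END
    CLOSE cur;
    DEALLOCATE cur;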

  • Time display in a date item

    This is driving me nuts.
    I am trying to dynamically set the initial
    value of a date item in a form.
    I want it to be one hour prior to the current
    system time rounded down to the hour.
    i.e. if the current datetime is 07/26/2001 09:12, I want to see 07/26/2001 08:00 in the forms item.
    The code I use is as follows:
    DECLARE
    -- get system datetime
    v_date_time_char_format VARCHAR2(25) := :SYSTEM.CURRENT_DATETIME;
    -- variable for hour
    v_hour VARCHAR2(2);
    BEGIN
    IF :tbl_sf_site.test_date IS NULL THEN
    -- go back one hour and delete blank caused by to_char
    v_hour := SUBSTR(TO_CHAR(TO_NUMBER(SUBSTR(v_date_time_char_format,13,2)) - 1,'09'),2,2);
    -- set time back one hour
    v_date_time_char_format :=
    SUBSTR(v_date_time_char_format,1,12) || v_hour || ':00';
    -- display the results of date manipulation
    f_alert.ok('v_date_time_char_format is ' || v_date_time_char_format);
    -- set value of form item
    :tbl_sf_site.test_date :=
    TO_DATE(v_date_time_char_format,'DD-MON-YYYY HH24:MI');
    END IF;
    END;
    The f_alert.ok that displays is 26-JUL-2001 08:00, but the value in the item is 07/26/2001 00:00.
    The Forms item is a text item of Date data type with a format mask of 'mm/dd/rrrr hh24:mi'.

    Change the data type of the Forms item to Datetime, then try again.
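    If the item is switched to Datetime, the whole block can probably be reduced to plain date arithmetic. A minimal, untested sketch (same item name as above; if TRUNC with a format mask is not available client-side, the TO_CHAR/TO_DATE route above still applies):
    IF :tbl_sf_site.test_date IS NULL THEN
       -- round the current time down to the hour, then step back one hour
       :tbl_sf_site.test_date := TRUNC(SYSDATE, 'HH24') - 1/24;
    END IF;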

  • Product Delivered status date should be in the standard search and result screens of the Service Order; Product Delivered time should be available in the result screen, with a facility to filter on the basis of time

    Hi team,
    The Product Delivered status date should be available in the standard search and result screens of the Service Order, the Product Delivered time should be available in the result screen, and the result screen should offer a facility to filter on the basis of time.
    How can I add this to the search and result screens? Is it possible through configuration, or does it require development changes?
    Component - BT116S_SRVO
    Thanks
    Kalpana

    Hi Kalpana
    Please reread my comment. I said to try and populate "Requested End" Date.
    Make sure the date profile assigned to the transaction includes this Date Type.
    If you can populate this either manually or programmatically, the solution should be quite straightforward.
    If you cannot do this, only then should you look at making any significant sort of enhancement.
    Regards
    Arden

  • Average Result of a data item not Average!

    I have chosen to install a Grand Total (at right) Columns Average of a calculated field, which is a data item in my crosstab report. The resulting values are always erroneous. The Sum(data-item) Total columns I use are accurate, but the Average Total is always wrong! Even if it counted null columns (hope not!) to divide the sum of the calculations by, it wouldn't be the figure it comes up with. Am I the first or just most recent to get bogus average results from a Total?

    Hello,
    I have this kind of problem too. First I thought it might be the statistical mean (sum/count-1) when using "calculate over all page items" - but that could also be a coincidence.
    When using "calculate only page items of this page" (sorry, I don't know the exact label of the feature, I'm using a German version of Discoverer), I get a number that is calculated as follows: the sum over all items divided by the count of distinct values of the leftmost attribute on the crosstab.
    Example:
    Country   City     Number of x   Number of y
    Germany   Berlin   4             6
              Munich   5             3
    France    Paris    3             3
              Lyon     4             2
    AVG                8             7
    What I want: 4 and 3.5.
    There I get the average of the per-country sums, but I want the average over all cities!
    Can anyone give a hint? (A workaround is to export to Excel, but that's not really what I want...)
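    Expressed in plain SQL against a hypothetical detail table (one row per city; table and column names are invented for illustration), the two calculations for the first measure differ like this:
    -- what the crosstab total appears to compute: 16 / 2 countries = 8
    SELECT SUM(number_of_x) / COUNT(DISTINCT country) FROM city_sales;
    -- what is wanted, the average over all cities: 16 / 4 cities = 4
    SELECT AVG(number_of_x) FROM city_sales;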
    THX!
    Greetings,
    Alex

  • Assigning an exact time to poll data using JDBC Sender?

    Hi,
    Can I set an exact time to poll data from a DB table?
    I mean, I want to poll data every day at 6 pm.
    But I cannot find any field to set the starting time.
    There are only poll interval fields in the JDBC sender.
    Regards,
    hiyoung.

    Hi,
    This might help you out.
    To run a scenario exactly once at a specific time every day / to schedule an adapter:
    Note: You will need the authorizations of the user group SAP_XI_ADMINISTRATOR with the role 'modify'.
    Go to Runtime Workbench -> Component Monitoring -> Communication Channel Monitoring
    Locate the link Availability Time Planning on the top right corner of your Communication Channel Monitoring page.
    In your case, the requirement is to schedule the sender file adapter once daily at 12:00 midnight.
    In Availability Time Planning, choose the availability time as daily and choose Create.
    Provide the details, such as the time 12:00.
    Then go to the Communication Channels tab, filter, and add the respective channel (File Sender).
    Once all of the above has been done, save the changes.
    Cheers,
    Raj

  • BTREE and duplicate data items: over 300 people read this, nobody answers?

    I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
    Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with an 8192-byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if in my case it would be more efficient to have a b-tree whose key is the combined (4-byte integer, 8-byte integer) and a zero-length or 1-byte dummy data item (in case zero-length is not an option).
    I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking whether my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
         while (i < hcp->dup_tlen) {
              memcpy(&len, data, sizeof(db_indx_t));
              data += sizeof(db_indx_t);
              DB_SET_DBT(cur, data, len);
              /*
               * If we find an exact match, we're done. If in a sorted
               * duplicate set and the item is larger than our test item,
               * we're done. In the latter case, if permitting partial
               * matches, it's not a failure.
               */
              *cmpp = func(dbp, dbt, &cur);
              if (*cmpp == 0)
                   break;
              if (*cmpp < 0 && dbp->dup_compare != NULL) {
                   if (flags == DB_GET_BOTH_RANGE)
                        *cmpp = 0;
                   break;
              }
              /* ... */
    What's the expert opinion on this subject?
    Vincent

    Hi,
    > The special thing about it is that with a given key, there can be a LOT of
    > associated data, thousands to tens of thousands. To illustrate, a btree with
    > an 8192-byte page size has 3 levels, 0 overflow pages and 35208 duplicate
    > pages! In other words, my keys have a large "fan-out". Note that I wrote
    > "can", since some keys only have a few dozen or so associated data items.
    > So I configure the b-tree for DB_DUPSORT. The default lexical ordering with
    > set_dup_compare is OK, so I don't touch that. I'm getting the data items
    > sorted as a bonus, but I don't need that in my application. However, I'm
    > seeing very poor "put (DB_NODUPDATA)" performance, due to a lot of disk read
    > operations.
    In general, performance slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method, lookups and inserts have O(log n) complexity (which implies that the search time depends on the number of keys stored in the underlying db tree). When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate or not. Thus, given the fact that for each key there is (in most cases) a large number of associated data items (up to thousands, or tens of thousands), an impressive number of pages have to be brought into the cache to check against the duplicate criteria.
    Of course, the problem of sizing the cache and the database's pages arises here. Your settings for these should tend towards large values, so that the cache can accommodate large pages (in which hundreds of records can be hosted).
    Setting the cache and the page size to their ideal values is a process of experimentation.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
    > While there may be a lot of reasons for this anomaly, I suspect BDB spends a
    > lot of time tracking down duplicate data items. I wonder if in my case it
    > would be more efficient to have a b-tree whose key is the combined (4-byte
    > integer, 8-byte integer) and a zero-length or 1-byte dummy data item (in
    > case zero-length is not an option).
    Indeed, this may be the best alternative, but testing must be done first. Try this approach and provide us with feedback. You can have records with a zero-length data portion.
    Also, could you provide more information on whether or not you're using an environment and, if so, how you configured it? Have you thought of using multiple threads to load the data?
    > Another possibility would be to just add all the data integers as a single
    > big giant data blob item associated with a single (unique) key. But maybe
    > this is just doing what BDB does... and would probably exchange "duplicate
    > pages" for "overflow pages".
    This is a terrible approach, since bringing an overflow page into the cache is more time consuming than bringing in a regular page, so a performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
    > Or, the slowdown is a BTREE thing and I could use a hash table instead. In
    > fact, what I don't know is how duplicate pages influence insertion speed.
    > But the BDB source code indicates that in contrast to BTREE the duplicate
    > search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
    The Hash access method has, as you observed, a linear duplicate search: locating the bucket is O(1), but the search within a set of duplicates is proportional to the number of items in the bucket. Combined with the fact that you don't want duplicate data (DB_NODUPDATA), using the Hash access method may not improve performance.
    This is a performance/tuning problem and it involves a lot of resources on our part to investigate. If you have a support contract with Oracle, then please don't hesitate to put up your issue on Metalink, or indicate that you want this issue to be taken private and we will create an SR for you.
    Regards,
    Andrei

  • Account-based COPA datasource taking a long time to extract data

    Hi
    We have created an account-based COPA datasource, but it is not extracting data in RSA3 even though the underlying tables have data in them.
    If the COPA datasource is created using fields only from CE4 (segment) and not CE1 (line items), then it extracts data, but that too after a very long time.
    If the COPA datasource is created using fields from CE4 (segment) and CE1 (line items), then it does not extract any records and RSA3 gives a timeout error.
    Also, the job scheduled from the BW side for extracting data runs for days but does not fetch any data, and neither does it give any error.
    The COPA tables have a huge amount of data, so performance could be an issue. But we have also created indexes on them and it is still not helping.
    Please suggest a solution to this...
    Thanks
    Gaurav

    Hi Gaurav
    Check note 392635, it might be useful.
    Regards
    Jagadish
    Symptom
    The process of selecting the data source (line item, totals table or summarization level) used by the extractor is unclear.
    More Terms
    Extraction, CO-PA, CE3XXXX, CE1XXXX, CE2XXXX, costing-based, account-based, profitability analysis, reporting, BW reporting, extractor, plug-in, COEP, performance, upload, delta method, full update, CO-PA extractor, read, datasource, summarization level, init, DeltaInit, Delta Init
    Cause and Prerequisites
    At the time of the data request from BW, the extractor determines the data source that should be read. In this case, the data source to be used depends on the update mode (full update, initialization of the delta method, or delta update), on the definition of the DataSource (line item characteristics (except for the REC_WAERS field) or calculated key figures), and on the existing summarization levels.
    Solution
    The extractor always tries to select the most favorable source, that is, the one with the smallest dataset. The following restrictions apply:
    o Only the 'Full' update mode from summarization levels is supported during extraction from account-based profitability analysis up to and including Release PI2001.1. Therefore, you can only ever load individual periods for a controlling area. You can also use the delta method as of Release PI2001.2; however, the delta process is only possible as of Release 4.0. The delta method must still be initialized from a summarization level. The following delta updates then read line items. In the InfoPackage, you must continue to select the controlling area as a mandatory field; you then no longer need to make a selection on individual periods. However, the period remains a mandatory field for the selection. If you do not want this, you can proceed as described in note 546238.
    o To enable reading from a summarization level, all characteristics that are to be extracted with the DataSource must also be contained in this level (entry * in the KEDV maintenance transaction). In addition, the summarization level must have status 'ACTIVE' (this also applies to the search function in the maintenance transaction for CO-PA DataSources, KEB0).
    o For DataSources of the costing-based profitability analysis, data can only be read from a summarization level if no other characteristics of the line item were selected (the exception here is the 'record currency' (REC_WAERS) field, which is always selected).
    o An extraction from the object level, that is, from the combination of tables CE3XXXX/CE4XXXX ('XXXX' is the name of the result area), is only performed for full updates if (as with summarization levels) no line item characteristics were selected. During the initialization of the delta method this is very difficult to do because of the requirements for a consistent dataset (see below).
    o During initialization of the delta method and the subsequent delta update, the data needs to be read up to a defined time. There are two possible sources for the initialization of the delta method:
      - Summarization levels manage the time of the last update/data reconstruction. If no line item characteristics were selected and a suitable, active summarization level (see above) exists, the DataSource 'inherits' the time information of the summarization level. However, time information can only be 'inherited' for the delta method of the old logic (time stamp administration in the profitability analysis). As of Plug-In Release PI2004.1 (Release 4.0 and higher), a new logic is available for the delta process (generic delta). For DataSources with the new logic (converted DataSources or DataSources recreated as of Plug-In Release PI2004.1), the line items that appear between the time stamp of the summarization level and the current time minus the security delta (usually 30 minutes) are also read after the suitable summarization level is read. The current time minus the security delta is set as the time stamp.
      - The system reads line items if it cannot read from a summarization level. Since data can continue to be updated during the extraction, the object level is not a suitable source, because other updates can be made on profitability segments that were already updated. The system would have to recalculate these values by reading line items, which would result in a considerable extension of the extraction time.
    In the case of delta updates, the system always reads from line items.
    o During extraction from line items, the CE4XXXX object table is read as an additional table for the initialization of the delta method and for full updates, so that possible realignments can be taken into account. In principle, the CE4XXXX object table is not read for delta updates. If a realignment is performed in the OLTP, no further delta updates are possible as they would make the data inconsistent between OLTP and BW. In this case, a new initialization of the delta method is required.
    o When the system reads data from the line items, make sure that the indexes from note 210219 have been created for both the CE1XXXX (actual data) and CE2XXXX (planning data) line item tables. Otherwise, you may encounter long-running selections. For archiving, appropriate indexes are delivered in the dictionary as of Release 4.5. These indexes are delivered with the SAP standard system but still have to be created on the database.

  • The conversion of a varchar data type to a datetime data type resulted in an out-of-range value

    I am trying to insert records into a temporary table with date values concatenated with other string values into one large string value. I am getting the following error:
    Msg 242, Level 16, State 3, Line 12
    The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
    Msg 241, Level 16, State 1, Line 28
    Conversion failed when converting date and/or time from character string.
    -My code below
    Declare
           @hdrLOCAL char(255),                                                       
        @CR char(255),                                                             
        @BLDCHKDT DATETIME,                                                         
        @BLDCHTIME DATETIME,                                                         
        @hdrline int
        SELECT @hdrLOCAL = DDLINE FROM DD40400 WHERE INDXLONG =1
        SELECT @CR = DDLINE FROM DD40400 WHERE INDXLONG =2
        SELECT @hdrline =1
        SELECT
                @BLDCHKDT = CONVERT(varchar(20),T756.PAYDATE,105) ,
                -- convert(varchar,getdate(),15)
                @BLDCHTIME= CONVERT(varchar(20),T756.PAYDATE,105)
                FROM STATS.dbo.DD10500 T762
                LEFT OUTER JOIN STATS.dbo.DD10400 T756 ON (
                        T762.INDXLONG = T756.INDXLONG
                        AND T756.INCLPYMT = 1)
                WHERE (T756.INCLPYMT = 1)
                    AND (T762.DDAMTDLR <> 0)
      Create TABLE [dbo].[##DD10200B](
        [INDXLONG] [int] NOT NULL,
        [DDLINE] [varchar](8000) NOT NULL,
        [DEX_ROW_ID] [int] IDENTITY(1,1) NOT NULL)
    BEGIN
    INSERT INTO ##DD10200B (INDXLONG,DDLINE)
            VALUES (1,@hdrLOCAL +',' + @CR +','+ @BLDCHKDT +',' + @BLDCHTIME )
    END
    Msg 242, Level 16, State 3, Line 12
    The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
    Msg 241, Level 16, State 1, Line 28
    Conversion failed when converting date and/or time from character string.
    The Best thing in Life is Life

    Since the variables BLDCHKDT and BLDCHTIME are of type datetime, why are you trying to assign them a value of type varchar?
    Also, format 105 gives you dd-mm-yyyy, but SQL Server assumes the default format mm-dd-yyyy, so the error occurs for all dates whose day is greater than 12.
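    A quick illustration of that mechanism (hypothetical value; run under the default us_english/mdy date setting):
    -- Format 105 turns a datetime into a dd-mm-yyyy string ...
    DECLARE @s varchar(20) = CONVERT(varchar(20), CAST('20150113' AS datetime), 105);  -- '13-01-2015'
    -- ... and assigning that string back to a datetime implicitly converts it
    -- as mm-dd-yyyy, so any day greater than 12 raises error 242
    -- ("out-of-range value"):
    DECLARE @d datetime = @s;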
    Try the code below:
    Declare
    @hdrLOCAL char(255),
    @CR char(255),
    @BLDCHKDT Varchar(50),
    @BLDCHTIME Varchar(50),
    @hdrline int
    SELECT @hdrLOCAL = DDLINE FROM DD40400 WHERE INDXLONG =1
    SELECT @CR = DDLINE FROM DD40400 WHERE INDXLONG =2
    SELECT @hdrline =1
    SELECT
    @BLDCHKDT = CONVERT(varchar(20),T756.PAYDATE,105) ,
    -- convert(varchar,getdate(),15)
    @BLDCHTIME= CONVERT(varchar(20),T756.PAYDATE,105)
    FROM STATS.dbo.DD10500 T762
    LEFT OUTER JOIN STATS.dbo.DD10400 T756 ON (
    T762.INDXLONG = T756.INDXLONG
    AND T756.INCLPYMT = 1)
    WHERE (T756.INCLPYMT = 1)
    AND (T762.DDAMTDLR <> 0)
    Create TABLE [dbo].[##DD10200B](
    [INDXLONG] [int] NOT NULL,
    [DDLINE] [varchar](8000) NOT NULL,
    [DEX_ROW_ID] [int] IDENTITY(1,1) NOT NULL)
    BEGIN
    INSERT INTO ##DD10200B (INDXLONG,DDLINE)
    VALUES (1,@hdrLOCAL +',' + @CR +','+ @BLDCHKDT +',' + @BLDCHTIME )
    END
    The only change made is:
    @BLDCHKDT Varchar(50),
    @BLDCHTIME Varchar(50),
    Surender Singh Bhadauria
    My Blog
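    For completeness, an alternative sketch (untested; the variable values are placeholders, and CONCAT assumes SQL Server 2012 or later): keep PAYDATE in a datetime variable and format it only while building the output string, so no varchar value is ever assigned to a datetime variable.
    DECLARE @hdr   varchar(255) = 'HEADER',   -- stands in for @hdrLOCAL
            @cr    varchar(255) = 'CR',       -- stands in for @CR
            @paydt datetime     = GETDATE(),  -- stands in for T756.PAYDATE
            @line  varchar(8000);
    -- style 105 = dd-mm-yyyy, style 108 = hh:mi:ss
    SET @line = CONCAT(@hdr, ',', @cr, ',',
                       CONVERT(varchar(10), @paydt, 105), ',',
                       CONVERT(varchar(8),  @paydt, 108));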

  • Modify code to pull the time dependent master data

    I fully understand the suggestion below for the requirement to add the time-dependent attribute company code.
    Thanks for the help, but please tell me whether there is a way I can modify the ABAP code so that the user enters the date (or a key date from/to range) on which he wants to pull the master data for company code, and the master data is pulled for that date. How should I proceed: should I create a variable on 0DOC_DATE, and how do I modify the code? Please help. I have opened another question with the same description as above to assign points.
    Thanks
    Soniya
    The literal within <..> is supposed to be replaced by the actual field name (as I didn't know the fields). In this case, I am changing your code for costcenter/company-code.
    data : wa like /bi0/qcostcenter.
    select single * from /bi0/qcostcenter into wa
    where costcenter = comm_structure-costcenter
    and objvers = 'A'
    and datefrom le comm_structure-<keydatefield>
    and dateto ge comm_structure-<keydatefield>.
    if sy-subrc = 0.
    result = wa-comp_code.
    endif.
    abort = 0.
    You can use this code for the update rule of company_code. You have to replace '<keydatefield>' with the name of a field that contains the date on which the company code is to be derived. If there is a date in your comm_structure (e.g. aedat) which you can use, you can specify that field in place of this literal (i.e. instead of comm_structure-<keydatefield> use comm_structure-aedat). If you have no such field and you wish to use the current date for getting the company code from time-dependent master data, you can use sy-datum (i.e. replace comm_structure-<keydatefield> with sy-datum).
    And it should work.
    The 'master data attribute' option is one of the options (radio buttons) available when you create an update rule.

    That is what the code is doing anyway.
    If your transaction data in the cube doesn't have a date, how does it know whether it is February data or March data?
    If it has a date or month field, you should modify and use this code to update the company code based on that date instead of the system date.
    Other than that minor variation, it is already doing what you are looking for.

  • Remove time stamp from date

    hi guys
    I had a start date and an end date as prompts in a WebI report and am showing the same date values in the report header using the UserResponse function, but it shows the time with the date, while in the report both columns are shown in dd/mm/yyyy format. I have tried using FormatDate(UserResponse("Start Date:"); "dd mmm yyyy"), but it gives an error:
    "The expression/sub-expression uses an invalid data type"
    Please tell me what is wrong in the formula.
    thanks & regards

    Hi,
    It is creating an #ERROR when the format you assign to the ToDate function does not match the date in the string.
    E.g. if you have a user response like '1/1/1998 12:00:00AM' you could use the formula:
    =FormatDate(ToDate(Left(UserResponse("prompt");8);"M/d/yyyy");"dd MMM yyyy")
    to get a result as: 01 JAN 1998
    However this fails when the date changes to 10/1/1998 or 1/10/1998 or 10/10/1998. In those cases the format of the ToDate function should be: "MM/d/yyyy" or "M/dd/yyyy" or "MM/dd/yyyy".
    I know it's a lot of work, but the best solution is to create variables for month, day and year:
    1. variable for month v_month
    =Left([prompt_date];Pos([prompt_date];"/")-1)
    2. we need to get rid of the first part of the string to find the second '/' so we create a dummy variable dummy1
    =Right([prompt_date];Length([prompt_date])-(Length([v_month])+1))
    3. variable for day v_day
    =Left([dummy1];Pos([dummy1];"/")-1)
    4. create a second dummy variable dummy 2 to get rid of the day part of the string
    =Right([dummy1];Length([dummy1])-(Length([v_day])+1))
    5. variable for year v_year
    =Left([dummy2];4)
    Just assuming that the year is always 4 positions in the prompt string.
    Now you can create your variable for displaying the date in the format you want:
    =FormatDate(ToDate(FormatNumber(ToNumber([v_month]);"00")+FormatNumber(ToNumber([v_day]);"00")+FormatNumber(ToNumber([v_year]);"0000");"MMddyyyy");"dd MMM yyyy")
    Note: I was just assuming that your date format is always month/day/year. If the format is different then you need to change the variables accordingly.
    Hope this helps
    Harry

  • Time Capsule - retrieving data

    Hi all
    The hard drive failed on my Mac and I want to try to reclaim some previous data from the Time Capsule while the Mac is being worked on.
    I am using a MacBook in its place to continue working. When I go "into" the Time Capsule, though, I cannot see all of the Mac's saved data. Does that mean it isn't there to retrieve, or just that I can only access it from that one computer?
    thanks

    What data are you after? You will find access to things like iTunes or iPhoto restricted, because you need to restore the entire library before you can access content inside it.
    See instructions here for restore of specific items.
    Q15 http://pondini.org/TM/FAQ.html
    I encourage people to read the whole section on restores, Q14-17.
    The best way, IMHO, is to get a suitably sized USB hard disk and do a full restore of the whole hard disk of your dead Mac. Then you can find out the reality of what is there and what is perhaps missing. TM is not 100%, and especially if a drive has been failing over time, certain problems can result in stuff not being backed up at all.

  • Data not uploading in Time dependent Master data Infoobject

    Hello All,
    I have a master data infoobject for HR entity and have to load data from the PSA into that infoobject.
    The HR entity infoobject already has some data, as below:
    HR Entity   Version   Date from     Date To
    x           A         01.07.2013    31.12.9999
    x           A         19.04.2013    30.06.2013
    x           A         01.09.2012    18.04.2013
    x           A         01.01.2012    31.08.2012
    x           A         01.01.1000    31.12.2011
    Now the data in PSA is as follows:
    HR Entity   Start Date   End Date
    X           01.01.2012   18.12.2013
    Once I loaded this data into the infoobject, I cannot see this value, which is the latest value for this HR entity.
    Can somebody please explain how data gets loaded into a time-dependent master data infoobject and why this entry is not being loaded into the infoobject?
    Regards
    RK

    Hi,
    Did you activate the master data after your load?
    You can also check the version 'M' records and see if your record is there.
    Did the load finish green?
    The problem is that your entry overlaps all existing time intervals, which can't be deleted or merged as there may be dependent transactional data. You first have to delete the transactional data for this entity.
    Then you can delete the time-dependent data and reload it from your PSA.
    BW will then build the correct time intervals.
    The easiest way is to change the time interval in the PSA, see the example below:
    At the moment the time interval is not accepted. But you can add time intervals before 31.12.2011 and after 01.07.2013; the system will then create the remaining time intervals. E.g. if your new record is:
    HR Entity   Start Date   End Date
    X           01.08.2013   18.12.2013
    Result will be:
    HR Entity   Version   Date from     Date To
    x           A         19.12.2013    31.12.9999
    x           A         01.08.2013    18.12.2013
    x           A         01.07.2013    31.07.2013
    Regards, Jürgen

  • Schedule lines should not change within 3 weeks of the MRP date

    Hi
    Is there any specific customisation required to get the following results in the case of schedule lines?
    1) MRP should not change/adjust already sent delivery schedule lines which lie within 3 weeks of the MRP date.
    2) The system should not allow the GRN to be booked unless the messages/printout have been generated from the system, in the case of a scheduling agreement.
    Thanking you
    Shantakumar
    09944009752

    Hi,
    There are 2 possibilities:
         1. Use MRP fixing with planning time fences; this is done by making the relevant settings in the MRP1 view of the material master.
         2. Use the firming options of the scheduling agreement; these are set per material number entered in the scheduling agreement.
    The GR can only be posted for schedule lines that have been processed, i.e. for which output has been generated.
    Regards,
