DOCUMENT DATE FOR BACK DATA TO BE UPLOADED

hello experts,
I have a purchasing cube with transaction data from last year.
I have PO number as a characteristic and 2LIS_02_SCL as the DataSource for purchasing.
Now I want the date on which each PO was created, and there is a document date available in the DataSource.
I have added 0DOC_DATE as a characteristic to the cube, but the date is not coming through in the report; instead, # appears in the Doc date column.
When I check the data in the PSA, the document date value is available correctly.
Please help me load the document date for the back data.
How do I do this so that the existing transaction data is not affected and I also get the document date for the previous PO numbers?
Please explain in detail.
Will be very thankful.

Hi,
Delete one request of that DataSource from the cube and then reconstruct it.
Check whether the data for the document date is coming or not.
If it is coming, delete all the requests and reconstruct them.
The document date will be reflected after that.

Similar Messages

  • Not to allow to save the billing document in the back date if FI period closed

    Can anybody help me, please? My FI period for October is closed, but there are still some open delivery documents in the billing due list for October. So now, let's say on 7th December, if I create the billing documents with the date 30.11.2007, then since the FI period is closed, the system will allow me to save the billing document but will not pass it to accounting.
    This is what normally happens in my industry: end users create billing documents in the back date even after the period close, so all the documents end up in VFX3.
    So can anybody help me? Can we have a check so that, if the FI period is closed for a particular month and we try to create a billing document in that closed period, the system does not allow the billing document to be generated at all, rather than saving the document and blocking it for accounting?
    Regards.
    Laxmikanta Das.
    09958119889.

    Dear Laxmikanta,
    I have also not come across the situation you describe, and hence don't have much of an idea about authorization groups.
    However, going through the F1 help, it says:
    <b>Procedure</b>
    If only a limited set of users is to be able to post in a particular posting period, proceed as follows:
      o Add the posting period authorization (authorization object F_BKPF_BUP) to the authorizations of the selected users. Assign an authorization group (e.g. '0001').
      o Enter the account type '+' for the posting period variant to which the restriction is to apply. Enter the period(s) whose use is to be restricted in the first period, those which are available to all users in the second period, and the authorization group (e.g. '0001') in the last column.
    <b>Examples</b>
    A posting period can be successively restricted. If, for example, 10 users have the posting period authorization with authorization group '0001', and 3 of these 10 users also have authorization group '0002', then while the period is to be accessible to the 10 selected users, the authorization group '0001' is entered in the posting period variant. Access can later be restricted to the remaining 3 users by entering '0002'.
    From the above, I feel your requirement can be met.
    Thanks
    G. Lakshmipathi

  • Powerpivot for sharepoint error: Unable to refresh data for a data connection in the workbook

    Hello,
    I have three errors when I try to use a simple PowerPivot workbook published in SharePoint (nothing on Google has helped me):
    1- Unable to refresh data for a data connection in the workbook. Try again or contact your system administrator. The following connections failed to refresh: PowerPivot Data
    2- The embedded PowerPivot data in the workbook cannot be loaded due to a version mismatch
    3- 01/21/2012 17:26:47.08 w3wp.exe (0x1950) 0x0AD0 Excel Services Application Excel Calculation Services bccc Medium Session.HandleTrimmedWorkbookReloading: userOperation ApplySlicerSelectionOperation requires BaseWorkbook: "http://crm2011:2020/Marketing%20Reports/test2_excel32bits.xlsx" [0x409] [Saturday, 21 January 2012 09:40:18] [BaseWB ID: 2] to be untrimmed if it is currently trimmed. The workbook is currently NOT trimmed. fb614a65-e398-4b97-a98d-fb7b23eab39f
    01/21/2012 17:26:47.08 w3wp.exe (0x1950) 0x0AD0 Excel Services Application Excel Calculation Services f1va Medium CWorkbookWrapper::CWorkbookWrapper: Created with ID=4 fb614a65-e398-4b97-a98d-fb7b23eab39f
    01/21/2012 17:26:47.09 w3wp.exe (0x1950) 0x0AD0 Excel Services Application Excel Calculation Services eq3r Medium ConnectionRequest.ConnectionRequest: New connection request. SessionId=1.V21.4PI+fCwIq52LH++nOoMzs90.5.en-US5.en-US73.-0060#0000-10-00-05T03:00:00:0000#+0000#0000-03-00-05T02:00:00:0000#-006036.bfceb31b-7122-46ca-9e2a-ae52cefcfcaf1.N, WorkbookVersion=ConnectionInfo.WorkbookVersion: Uri=http://crm2011:2020/Marketing Reports/test2_excel32bits.xlsx, Version=Saturday, 21 January 2012 09:40:18 fb614a65-e398-4b97-a98d-fb7b23eab39f
    01/21/2012 17:26:47.12 w3wp.exe (0x1950) 0x0AD0 Excel Services Application Excel Calculation Services aysl Medium Succeeded to initialize a chart. fb614a65-e398-4b97-a98d-fb7b23eab39f
    01/21/2012 17:26:47.12 w3wp.exe (0x1950) 0x0AD0 Excel Services Application Excel Calculation Services 8xk9 Medium ExternalSource.ExecuteOperation: We exhausted all available connection information. Exception: Microsoft.Office.Excel.Server.CalculationServer.Interop.ConnectionInfoException: Exception of type 'Microsoft.Office.Excel.Server.CalculationServer.Interop.ConnectionInfoException' was thrown.     at Microsoft.Office.Excel.Server.CalculationServer.ConnectionInfoManager.GetConnectionInfo(Request request, String externalSourceName, Int32 externalSourceIndex, Boolean& shouldReportFailure)     at Microsoft.Office.Excel.Server.CalculationServer.ExternalSource.ExecuteOperation(Request request, ExternalSourceStateInfo externalSourceStateInfo, ExternalSourceStateInfo prevExternalSourceStateInfo, Int32 index, ConnectionInfoManager connectionInfoManager, ExternalDataScenario scenario, DataOperation dataOpe... fb614a65-e398-4b97-a98d-fb7b23eab39f
    01/21/2012 17:26:47.12* w3wp.exe (0x1950) 0x0AD0 Excel Services Application Excel Calculation Services 8xk9 Medium ...ration, Boolean verifyPreOperationConnection), Data Connection Name: PowerPivot Data, SessionId: 1.V21.4PI+fCwIq52LH++nOoMzs90.5.en-US5.en-US73.-0060#0000-10-00-05T03:00:00:0000#+0000#0000-03-00-05T02:00:00:0000#-006036.bfceb31b-7122-46ca-9e2a-ae52cefcfcaf1.N, UserId: 0#.w|contoso\manager fb614a65-e398-4b97-a98d-fb7b23eab39f
    My server and client OLAP versions are the same (MSOLAP.5); I used SQL Server 2008 R2 SP1 and SharePoint 2010 SP1, and neither a reboot nor an iisreset has any effect.
    Thanks in advance for your help.

    Hello Challen Fu,
    I would be so grateful if you could please help me out. I have been trying to find a solution to the same error message.
    In my case, the PowerPivot reports were working before on a regular team site, but then two things changed:
    a) I created a top-level site using the BI Center template. Now I am using a Business Intelligence template; I created a PowerPivot Gallery library and uploaded a few PowerPivot reports.
    b) On the back end, the database instance was upgraded to SQL Server 2012.
       The front-end server VDSP01 remains SQL Server 2008 R2, where SharePoint 2010 was installed as a farm.
    Now the reports will display in SharePoint, but they will not refresh; the error message I get is the same.
    Scenario recap:
    a- Server VDSP01 uses SQL Server 2008 R2, where SharePoint 2010 was installed as a farm
    b- On the back end, the database instance name was replaced with a SQL Server 2012 instance:
           from SQL Server 2008 R2 (instance DBDEV-COTS\COTS)
           to   SQL Server 2012 (instance VTSQL01\COTS)
    c- I was told that from VDSP01 they ran CliConfg.exe to create a SQL Server alias:
           where BEFORE: vdsharepoint --> DBDEV-COTS\COTS
           and   AFTER:  vdsharepoint --> VTSQL01\COTS
    I appreciate in advance any help you can provide.
    wanda larangeira

  • There is no source data for this data record, Message FZ205

    Hi Experts,
    I am facing a problem with the DME file download. This problem started all of a sudden in our production system last month; it had never happened before. Our system landscape has not changed either, although as per our Basis consultant, he has added two or three new application servers to the production client. We do not have this problem in our testing clients.
    Please note that we have been using output medium '1' from day one, so the system generates the DME in the 'File System', which we download to the desktop and upload to the bank online. After running the payment run, when we try to download the DME file, the system gives the error "There is no source data for this data record, Message FZ205".
    I have tried to fix this issue in many ways but have not been able to. Can you please let me know the reason for this error and a solution to fix it?
    With best regards,
    BABA

    Hi Shailesh,
    Please share how you solved this problem.
    Many Thanks,
    Lakshmi

  • Restrict User to access/fetch data in back date

    Hi All,
    As the financial year is about to close, one of my clients has a requirement that, for all t-codes, users must be restricted from accessing or fetching data in back dates: up to 31st March, a user should not be able to access it. So my question is: can we do this at the authorization level or from the Basis end?
    If not, then how would it be possible?
    Note that they want only 3 users out of all to retain this access.
    Guide me on the same.
    Regards
    Disha

    Hi,
    As per the standard behaviour of the system, you won't be able to achieve what you are looking for, as the system does not allow users to be restricted based on period for reports.
    You can check for either a BAdI or exits/enhancement points for restricting, but that too would be based on which t-codes you want to restrict: you identify the t-codes and make the enhancements accordingly.
    Cheers ,
    Dewang

  • Baseline Date for Due Date Calculation required for S/L indicators

    Hello,
    I'm facing the following problem:
    I'm trying to insert an invoice (using a customer master record), but the system blocks me because the field ZFBDT (Baseline Date for Due Date Calculation) is a required entry.
    The strange thing is that if I use a special G/L indicator the field is mandatory; if not, the date may be defaulted.
    I checked in OB41 but there are no differences between the special PK and the normal PK.
    Any ideas?
    Thanks in advance
    Alberto

    Dear Alberto,
    the field "Due On (BSEG-ZFBDT)" cannot be controlled with field status.
    It is controlled by field attribute of screen painter (Tcd: SE51).  If
    you look at element attribute for "Due On" field, a flag for required
    entry is activated.  In this case, field status has no control over
    the field.
    As of release 3.1G, field BSEG-ZFBDT is hardcoded in most FI screens
    to be mandatory and cannot be influenced by any field status
    changes. This situation is only valid when posting with special G/L
    indicator (ie PK 29/39).
    SAP development team has determined that this is a critical field.  The
    reason behind this is that this special GL screen and the data entered
    here are very important to many other programs. This data affects
    liabilities and receivables where due date is necessary almost
    everytime. Thus, we changed this field in this screen in order to
    prevent problems in many other areas. The reason is explained further
    in note 95079.
    I hope this helps You.
    mauri

  • How to get current date for posting date

    hi,
    How do I get the current date for the posting date? Any sample code?
    Thanks

    Hi,
    Use
    SELECT GETDATE()
    (SQL Server syntax) for the current date.
    Regards,
    Rahul

  • Not allow to save the billing document in the back date if FI period closed

    Hi,
    We have a requirement in which the end user should not be able to create a billing document if the billing date lies in an FI posting period which has been closed.
    As of now, users are able to create billing documents even if the billing date lies in a closed posting period, but the accounting document does not get generated.
    Now we want to block the billing document creation as well.
    Please provide some pointers on the above scenario. Thanks.

    hello,
    Has your query been solved? If not, here is the solution.
    Go to the program RV60AFZZ. This is the user exit for billing.
    In it there is a form called
    FORM userexit_number_range USING us_range_intern.
    In that form you have to write the code.
    Variables to pass to the function module:
    DATA: lv_buper LIKE t009b-poper,
          lv_gjahr LIKE t009b-bdatj.

    CALL FUNCTION 'DATE_TO_PERIOD_CONVERT'
      EXPORTING
        i_date  = sy-datum
        i_periv = 'V3'        " fiscal year variant; use the one assigned to your company code
      IMPORTING
        e_buper = lv_buper
        e_gjahr = lv_gjahr.
    In lv_buper you will get the last posting period which is open, i.e. the posting period month.
    Based on the lv_buper you get, you can write your validations, e.g. convert the billing date to a period the same way and raise an error if it falls in an earlier (closed) period.
    You have to remember that the posting period you get is based on the fiscal year variant.
    hope that helps.
    reward if useful.
    cheers,
    Ravi Kiran.

  • TimeMachine randomly ignores data for back-up

    For some time I have been experiencing the following problem: Time Machine will not back up all changes. It happened just this morning again. I put some new data in one folder and clicked Back Up Now in Time Machine preferences, but the new data was not backed up. This problem of Time Machine not backing up data started some time ago, even prior to the update to Snow Leopard. BTW: it is not always the same folder or data affected. I have tried a lot of things, like several full resets of Time Machine, and erasing the external FireWire drive used for backup and then performing a full backup. However, nothing solved the problem. For some time everything looks fine, but then the problem that Time Machine will not back up some newly added data starts again. I performed several checks of my Macintosh HD and the external drive, and both Disk Utility and TechTool say the drives are OK. Any suggestions?

    miles123 wrote:
    Seen, but nothing related to my problem there. TimeMachine Buddy says the backup completed successfully, but I cannot see the data in my backup. And as already mentioned, it happens randomly and it is not always the same folder affected.
    How much does TM Buddy say was copied? If it says something silly like 8 or 9 files for 93 bytes, try a Restart. That should fix it. There is a known problem for some users on Snow Leopard where doing a Verify Disk (not permissions) on the boot drive, using the copy of Disk Utility on the boot drive, somehow prevents TM from finding changes. A Restart fixes it (and earlier changes will then be saved).
    If it is copying some things correctly but not others, it may be a corrupted file in your home folder. Try making a test user, running a backup, and seeing if that user's data gets saved properly. If so, post back and we'll see if we can fix it.

  • Historic and Current data for Master data bearing objects

    Hi All,
    We are trying to implement type 2 dimensions for all the master-data-bearing characteristics where we require historic and current data to be available for reporting. The master data can have a number of attributes, and all of them can be time dependent. We are not getting any 'date from' or 'date to' fields from the source system.
    For example, the table below shows data entering BI on different days:

    MasterID  ATTR1  ATTR2   Day of entry into BI
    123506    Y      REWAR   day 1
    123506    N      REWAR   day 4
    123506    Y      ADJUST  day 4
    123506    N      ADJUST  day n

    The field 'day of entry into BI' is only for your understanding; we do not get any date fields from the source. SID is the field we are generating for uniqueness; it is a counter. EFF_DATE would be the current date for all the data. EXP_DATE would be 31.12.9999 until the attributes change. SID and MasterID together would be the key.
    On day 1, the following data enters BI:

    SID  MasterID  ATTR1  ATTR2   EFF_DATE   EXP_DATE
    1    123506    Y      REWAR   2/10/2009  12/31/9999

    On day 4, 2 data records enter with the same Master ID:

    SID  MasterID  ATTR1  ATTR2   EFF_DATE   EXP_DATE
    2    123506    N      REWAR   2/13/2009  12/31/9999
    3    123506    Y      ADJUST  2/13/2009  12/31/9999

    The EXP_DATE of the day 1 record needs to be changed to the current date. Also, since two records enter, only the latest record keeps EXP_DATE 12/31/9999, and the EXP_DATE of the first record of day 4 should also change to the current date. So the following changes should happen:

    CHANGE
    SID  MasterID  ATTR1  ATTR2   EFF_DATE   EXP_DATE
    1    123506    Y      REWAR   2/10/2009  2/13/2009

    CHANGE
    SID  MasterID  ATTR1  ATTR2   EFF_DATE   EXP_DATE
    2    123506    N      REWAR   2/13/2009  2/13/2009

    On day n, one data record enters with the same Master ID:

    SID  MasterID  ATTR1  ATTR2   EFF_DATE   EXP_DATE
    4    123506    N      ADJUST  2/22/2009  12/31/9999

    The change is:

    CHANGE
    SID  MasterID  ATTR1  ATTR2   EFF_DATE   EXP_DATE
    3    123506    Y      ADJUST  2/13/2009  2/22/2009

    The data expected in the P table on or after day n, until any other record enters for this Master ID:

    SID  MasterID  ATTR1  ATTR2   EFF_DATE   EXP_DATE
    1    123506    Y      REWAR   2/10/2009  2/13/2009
    2    123506    N      REWAR   2/13/2009  2/13/2009
    3    123506    Y      ADJUST  2/13/2009  2/22/2009
    4    123506    N      ADJUST  2/22/2009  12/31/9999
    Has anyone worked on type 2 dimensions before? Any ideas on implementing this logic would be appreciated.
    Regards,
    Sudeepti

    Compound the Master ID with EFF_DATE and EXP_DATE as superior objects,
    so you will get a P table like:

    SID  EFF_DATE   EXP_DATE    MasterID  ATTR1  ATTR2
    1    2/10/2009  2/13/2009   123506    Y      REWAR
    2    2/13/2009  2/13/2009   123506    N      REWAR
    3    2/13/2009  2/22/2009   123506    Y      ADJUST
    4    2/22/2009  12/31/9999  123506    N      ADJUST
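
    For what it's worth, the close-out logic described above can be sketched in a few lines of plain Python (an illustration only, not BI code; the surrogate SID counter and the 12/31/9999 high date follow the example in this thread):

    from datetime import date

    HIGH_DATE = date(9999, 12, 31)

    def load_records(p_table, incoming, load_date):
        """Close the open record for each incoming Master ID, then append the
        new record; only the last one loaded stays open (EXP_DATE = HIGH_DATE)."""
        for master_id, attr1, attr2 in incoming:
            # Close whatever record is still open for this Master ID.
            for row in p_table:
                if row["master_id"] == master_id and row["exp_date"] == HIGH_DATE:
                    row["exp_date"] = load_date
            p_table.append({
                "sid": len(p_table) + 1,  # surrogate counter, as described above
                "master_id": master_id,
                "attr1": attr1,
                "attr2": attr2,
                "eff_date": load_date,
                "exp_date": HIGH_DATE,
            })

    p = []
    load_records(p, [("123506", "Y", "REWAR")], date(2009, 2, 10))   # day 1
    load_records(p, [("123506", "N", "REWAR"),
                     ("123506", "Y", "ADJUST")], date(2009, 2, 13))  # day 4
    load_records(p, [("123506", "N", "ADJUST")], date(2009, 2, 22))  # day n
    for row in p:
        print(row)

    Running this reproduces the P table above, including the day 4 record that is opened and closed on the same date.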

  • How to see data for a particular date from an alert log file

    Hi Experts,
    I would like to know how I can see data for a particular date from alert_db.log in a Unix environment. I'm using Oracle 9i on Unix.
    Right now I'm using tail -500 alert_db.log > alert.txt and then viewing the whole thing. But is there any easier way to see a particular date or time?
    Thanks
    Shaan

    Hi Jaffar,
    Here I have to pass the exact date and time; is there any way to see records for, let's say, Nov 23 2007? Because when I used this:
    tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
    it's not working. Here is the sample log file:
    Mon Nov 26 21:42:43 2007
    Thread 1 advanced to log sequence 138
    Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Mon Nov 26 21:42:43 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 137
    Mon Nov 26 21:42:43 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 137
    ARC1: Unable to archive log 1 thread 1 sequence 137
    Log actively being archived by another process
    Mon Nov 26 21:42:43 2007
    ARCH: Beginning to archive log 1 thread 1 sequence 137
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137.dbf'
    ARCH: Completed archiving log 1 thread 1 sequence 137
    Mon Nov 26 21:42:44 2007
    Thread 1 advanced to log sequence 139
    Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Mon Nov 26 21:42:44 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 138
    ARC0: Beginning to archive log 3 thread 1 sequence 138
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138.dbf'
    Mon Nov 26 21:42:44 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 138
    ARCH: Unable to archive log 3 thread 1 sequence 138
    Log actively being archived by another process
    Mon Nov 26 21:42:45 2007
    ARC0: Completed archiving log 3 thread 1 sequence 138
    Mon Nov 26 21:45:12 2007
    Starting control autobackup
    Mon Nov 26 21:45:56 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0033'
    handle 'c-2861328927-20071126-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Tue Nov 27 21:23:50 2007
    Starting control autobackup
    Tue Nov 27 21:30:49 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071127-00'
    Tue Nov 27 21:30:57 2007
    ARC1: Evaluating archive log 2 thread 1 sequence 139
    ARC1: Beginning to archive log 2 thread 1 sequence 139
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139.dbf'
    Tue Nov 27 21:30:57 2007
    Thread 1 advanced to log sequence 140
    Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Tue Nov 27 21:30:57 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 139
    ARCH: Unable to archive log 2 thread 1 sequence 139
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARC1: Completed archiving log 2 thread 1 sequence 139
    Tue Nov 27 21:30:58 2007
    Thread 1 advanced to log sequence 141
    Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Tue Nov 27 21:30:58 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 140
    ARCH: Beginning to archive log 1 thread 1 sequence 140
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140.dbf'
    Tue Nov 27 21:30:58 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 140
    ARC1: Unable to archive log 1 thread 1 sequence 140
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARCH: Completed archiving log 1 thread 1 sequence 140
    Tue Nov 27 21:33:16 2007
    Starting control autobackup
    Tue Nov 27 21:34:29 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0205'
    handle 'c-2861328927-20071127-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Wed Nov 28 21:43:31 2007
    Starting control autobackup
    Wed Nov 28 21:43:59 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-00'
    Wed Nov 28 21:44:08 2007
    Thread 1 advanced to log sequence 142
    Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Wed Nov 28 21:44:08 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 141
    ARCH: Beginning to archive log 3 thread 1 sequence 141
    Wed Nov 28 21:44:08 2007
    ARC1: Evaluating archive log 3 thread 1 sequence 141
    ARC1: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141.dbf'
    Wed Nov 28 21:44:08 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 141
    ARC0: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    ARCH: Completed archiving log 3 thread 1 sequence 141
    Wed Nov 28 21:44:09 2007
    Thread 1 advanced to log sequence 143
    Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Wed Nov 28 21:44:09 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 142
    ARCH: Beginning to archive log 2 thread 1 sequence 142
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142.dbf'
    Wed Nov 28 21:44:09 2007
    ARC0: Evaluating archive log 2 thread 1 sequence 142
    ARC0: Unable to archive log 2 thread 1 sequence 142
    Log actively being archived by another process
    Wed Nov 28 21:44:09 2007
    ARCH: Completed archiving log 2 thread 1 sequence 142
    Wed Nov 28 21:44:36 2007
    Starting control autobackup
    Wed Nov 28 21:45:00 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Thu Nov 29 21:36:44 2007
    Starting control autobackup
    Thu Nov 29 21:42:53 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0206'
    handle 'c-2861328927-20071129-00'
    Thu Nov 29 21:43:01 2007
    Thread 1 advanced to log sequence 144
    Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Thu Nov 29 21:43:01 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 143
    ARCH: Beginning to archive log 1 thread 1 sequence 143
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143.dbf'
    Thu Nov 29 21:43:01 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 143
    ARC1: Unable to archive log 1 thread 1 sequence 143
    Log actively being archived by another process
    Thu Nov 29 21:43:02 2007
    ARCH: Completed archiving log 1 thread 1 sequence 143
    Thu Nov 29 21:43:03 2007
    Thread 1 advanced to log sequence 145
    Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Thu Nov 29 21:43:03 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 144
    ARCH: Beginning to archive log 3 thread 1 sequence 144
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144.dbf'
    Thu Nov 29 21:43:03 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 144
    ARC0: Unable to archive log 3 thread 1 sequence 144
    Log actively being archived by another process
    Thu Nov 29 21:43:03 2007
    ARCH: Completed archiving log 3 thread 1 sequence 144
    Thu Nov 29 21:49:00 2007
    Starting control autobackup
    Thu Nov 29 21:50:14 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071129-01'
    Thanks
    Shaan
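
    A note on the original question: the timestamp lines in the alert log carry the weekday and the time (e.g. "Mon Nov 26 21:42:43 2007"), which is why grepping for " Nov 23 2007" matches nothing. One way is to key off the timestamp lines and print everything under the target day; a minimal sketch in Python (the file name is hypothetical; point it at your own alert_<SID>.log):

    import re

    # Date to extract, as (month, day, year) strings from the timestamp line.
    TARGET = ("Nov", "23", "2007")

    # Timestamp lines look like "Mon Nov 26 21:42:43 2007".
    stamp = re.compile(r"^\w{3} (\w{3}) +(\d{1,2}) \d{2}:\d{2}:\d{2} (\d{4})$")

    printing = False
    with open("alert_NEWDB.log") as log:  # hypothetical file name
        for line in log:
            m = stamp.match(line.rstrip())
            if m:
                # Turn printing on inside the target day, off once we leave it.
                printing = m.groups() == TARGET
            if printing:
                print(line, end="")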

  • Discrepancy while REPLICATING META DATA (for new DATA SOURCE) in BI side.

    In R/3 I have created a simple transactional DataSource based on an InfoQuery.
    I even checked the veracity of this DataSource using RSA3 to see whether it extracts data. It works perfectly.
    Then I go to the BW side, click on the SAP module under which I created the DataSource, right-click, and select 'Replicate'.
    (I would love to post the screenshot here, but I don't think I can paste BMP files.)
    Here are the contents of the pop-up that appears:
    Title:Data Source from Source System Unknown
    Pop-up contents:
    Data Source (OSOA) DS_01
    does not exist in BI system
    How do you want to create the object in BI?
    1. as DataSource (RSDS)
    2. as 3.x DataSource (ISFS)
    3. this and following 3 as DataSource (RSDS)
    4. this and following 3 as 3.x DataSource (ISFS).
    Well, I normally select option 3 as per my instructions (without knowing the real reason).
    But sometimes, either for the same DataSource or for other DataSources that I created, the same pop-up appears like this:
    Title:Data Source from Source System Unknown
    Pop-up contents:
    Data Source (OSOA) DS_01
    does not exist in BI system
    How do you want to create the object in BI?
    1. as DataSource (RSDS)
    2. as 3.x DataSource (ISFS)
    Just TWO options.
    And if I select option 1, the DataSource does not work properly on the BI side, though it worked perfectly in R/3 under transaction RSA3 and showed me data.
    For some unknown reason, if I delete the erroneous DataSource on the BI side, sleep overnight, come back in the morning and replicate, the pop-up sometimes appears with FOUR options (notice the word 'SOMETIMES').
    Can someone explain the secret behind this?
    Thanks again in advance,
    Gold

    "3. this and following 3 as DataSource (RSDS)"
    means there are in total 3 new DataSources (not yet in BI) available, which you can replicate either as 7.0 DataSources (RSDS) or as 3.x DataSources (ISFS).
    (The other 2 DataSources were activated from RSA5, or created by other users under that SAP module.)
    If there is only 1 new DataSource, you will get just TWO options:
    1. as DataSource (RSDS)
    2. as 3.x DataSource (ISFS)
    After replicating with option 1, you should activate the DataSource in BI, then create InfoPackages, transformations, DTPs etc.

  • Using a range of dates for Key Date

    In an HR BI data warehouse, we have a position-to-position hierarchy where each of the nodes is time dependent. So it shows, for each node, the valid-from and valid-to dates and all the employees who report to that position. This hierarchy is built on the InfoObject 0HRPOSITION, which is maintained in R/3 and extracted to BI.
    Let us take an example:
    Position 1000 is valid from 1-1-2006 to 6-30-2006; employees reporting to this position are A, B, C, D.
    Position 1000 is valid from 7-1-2006 to 12-31-9999; employees reporting to this position are A, E, F, G.
    When a user chooses position 1000 and the date range 1-1-2006 to 12-31-2006, it should show the complete list of employees: A, B, C, D, E, F, G.
    But the key date can only be a single value, and it automatically takes today's date and pulls the nodes based on that.
    I have created a hierarchy node variable on the 0HRPOSITION InfoObject and entered the value 1000, with no value for the key date.
    The system simply shows employees A, E, F and G. That is my problem.
    My requirement is this: I would like to be able to give a date range (for the hierarchy), say from 1-1-2006 to 12/31/2006, and get the complete list of employees, which is A, B, C, D, E, F, G.
    Is this possible? Can I change the way this hierarchy is defined so that I can pull the possible values for a range?

    Thank you Ajay.
    After some thinking, I have realized that these options will not work.
    We have a position-to-position hierarchy that shows who reports to whom in the organization. This hierarchy is built on the InfoObject 0HRPOSITION. Each node in this hierarchy is time dependent. Note that the entire hierarchy is not time dependent; only the individual position nodes are.
    This 0HRPOSITION InfoObject exists in the Headcount cube as one of the characteristics. Here is my requirement:
    1. I want to show in a report all the employees (directly or indirectly) reporting to a manager for a period of, say, 1 year.
    I know that if I specify a key date for the 0HRPOSITION hierarchy, the report will show all the employees (direct and indirect) reporting to a position as of, say, 6/30/2008. I don't want this for a specific date; I want ALL the employees (direct and indirect) reporting to a position across a range of dates (say 1 year).
    Does that make sense? How do we achieve this goal?

  • Which Date for Sales Date in Demantra

    Hi Experts,
    For the 'Shipment History – requested items – shipped date' option for collecting actual quantity, which date is picked from the oe_order_lines_all table?
    I assume it is actual_shipment_date; initially I was thinking of scheduled_arrival_date.
    Or is it something else?
    If we are on a weekly bucket, there will be some date parsing for the Monday date (day of week start).
    I assume this:
    SALES_DATE = trunc(NEXT_DAY(oe_order_lines_all.actual_shipment_date - 7, 'MON'))
    Is my assumption correct?
    Thanks

    Hi MJ,
    Thanks for the clarification.
    If I use this for the date comparison, I should be getting the correct dates in my SQL, right?
    SALES_DATE = trunc(NEXT_DAY(oe_order_lines_all.actual_shipment_date - 7, 'MON'))
    Thanks
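
    For what it's worth, trunc(NEXT_DAY(d - 7, 'MON')) maps any date d to the Monday on or before it (the start of its Monday-based week), so the comparison should bucket correctly as long as both sides are truncated the same way. A quick way to sanity-check that logic outside the database, in plain Python (an illustration only, not Demantra code):

    from datetime import date, timedelta

    def week_start_monday(d):
        """Equivalent of trunc(NEXT_DAY(d - 7, 'MON')): the Monday on or before d."""
        return d - timedelta(days=d.weekday())  # Monday is weekday 0

    print(week_start_monday(date(2007, 11, 26)))  # a Monday maps to itself: 2007-11-26
    print(week_start_monday(date(2007, 11, 29)))  # a Thursday maps back to 2007-11-26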

  • Export data for domain data makes wrong file

    Hi!
    If I try to export data from a table with a column of type MDSYS.SDO_GEOMETRY in SQL Developer (both 1.2.0 and 1.2.1.3213), the result file contains information like this (for the insert clause):
    Insert into table_name (NUMB,GEOLOC) values (500949,'MDSYS.SDO_GEOMETRY');
    Also, in the previous version (1.2.0), when this column was shown in the data window it was more informative:
    MDSYS.SDO_GEOMETRY(2006, 262148, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1,95,2,1,109,2,1,133,2,1,157,2,1), MDSYS.SDO_ORDINATE_ARRAY(22847.57591,7216.21100000001,22842.04691,7217.2571,22841.44841,7218.00440000001,22843.39211,7228.31675000001,22844.13881,7232.35205000001,22845.63335,7239.52580000001,22845.63335,7240.27310000001,22845.03599,7240.72145000001,22826.05499,7244.15885000001,22814.39735,7246.10180000001,22809.01769,7246.84910000001,22807.67249,7246.40075000001,22802.44103,7222.33850000001,22799.19203,7213.03505000001,22795.8656525,7208.73815000001,22794.81386,7208.73200000001,22789.47752,7208.70080000001,22784.3570675,7209.03725000001,22758.6899675,7184.04095000001,22757.3447675,7183.59260000001,22751.9645375,7183.59245000001,22744.006055,7183.03205000001,22743.258785,7181.83640000001,22737.1684775,7181.35070000001,22736.7201725,7182.69575,22729.546295,7183.59245000001,22726.7066975,7186.58165000001,22725.9594275,7186.73105000001,22725.2121575,7186.43210000001,22723.11983,7184.56400000001,22722.29789,7184.48915000001,22721.55062,7186.28270000001,22721.326325,7186.80575000001,22717.515305,7191.36410000001,22715.7218,7193.68070000001,22710.1920875,7200.48080000001,22709.4448175,7206.90740000001,22709.370005,7214.15585000001,22709.74364,7214.52950000001,22711.6866275,7215.35150000001,22711.83611,7216.84610000001,22711.98545,7220.05925000001,22711.611815,7236.12560000001,22711.3876625,7247.63360000001,22711.4249975,7249.76345000001,22710.7523975,7250.95910000001,22710.0051275,7252.45355000001,22849.96763,7244.45780000001,22848.8559875,7243.04300000001,22848.32375,7242.36545000001,22849.51961,7243.41155000001,22848.8559875,7243.04300000001,22846.82921,7241.91710000001,22826.05499,7244.15885000001,22263.062285,7163.22935000001,22263.809555,7173.01865000001,22265.67773,7194.61475000001,22265.902025,7196.78180000001,22265.902025,7197.23015000001,22265.8272125,7197.37970000001,22265.304095,7197.97745000001,22217.9272625,7201.19075,22217.1799925,7201.56440000001,22216.8063575,7202.31170000001,22216.35791,7204.47875000001,22216.731545,7206.12275000001,22800.2381225,7220.28350000001,22798.3699475,7214.23070000001,22796.651255,7211.31620000001,22795.3061975,7209.82175000001,22794.9325625,7209.22385000001,22794.81386,7208.73200000001,22785.5170175,7170.21620000001,22777.3717175,7133.0768,22776.9234125,7130.76035000001,22775.727695,7125.90305000001,22774.6816025,7120.82150000001,22773.7100375,7115.81480000001,22774.53212,7109.98610000001,22774.4573075,7110.73340000001,22773.2617325,7111.70480000001,22773.1870625,7112.45210000001,22773.7100375,7115.81480000001,22773.11225,7113.87185000001,22767.95603,7108.93985000001))
    whereas in the new one it is just:
    MDSYS.SDO_GEOMETRY
    WBR,
    Sergey

    I'm a newbie here and not sure what you want exactly, but:
    First of all, I created a table on Oracle 10g (10.2.0.3) Enterprise Edition as follows:
    CREATE TABLE tblnm (
         "MI_PRINX" NUMBER(11,0),
         "GEOLOC" MDSYS.SDO_GEOMETRY,
         CONSTRAINT RP_MAP_PK PRIMARY KEY ("MI_PRINX")
    );
    INSERT INTO USER_SDO_GEOM_METADATA (TABLE_NAME, COLUMN_NAME, DIMINFO, SRID)
    VALUES ('tblnm','GEOLOC',MDSYS.SDO_DIM_ARRAY(mdsys.sdo_dim_element('X', -100000.0, 185000.0, 1.425E-5), mdsys.sdo_dim_element('Y', -100000.0, 200000.0, 1.5E-5)),262148);
    CREATE INDEX tblnm_SX ON tblnm (GEOLOC)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    INSERT INTO tblnm (MI_PRINX,GEOLOC) VALUES
    (1,MDSYS.SDO_GEOMETRY(2001, 262148, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,1,1), MDSYS.SDO_ORDINATE_ARRAY(6946.74932,9604.25675000001)));
    After that, I exported the data from this table with SQL Developer.
    As an insert clause, the result was:
    -- INSERTING into TBLNM
    Insert into TBLNM (MI_PRINX,GEOLOC) values (1,'MDSYS.SDO_GEOMETRY');
    When I tried to import the data (after a delete) using this command, I got:
    ERROR at line 1:
    ORA-00932: inconsistent datatypes: expected MDSYS.SDO_GEOMETRY got CHAR
    As a loader clause, the file looks like:
    LOAD DATA
    INFILE *
    Truncate
    INTO TABLE "TBLNM"
    FIELDS TERMINATED BY ','
    TRAILING NULLCOLS
    (MI_PRINX,
    GEOLOC)
    begindata
    "1","MDSYS.SDO_GEOMETRY"
    and so on. The result file doesn't contain the data for the SDO_GEOMETRY column - only the name of the class.
