Manual data corrections in data-Warehouse/OLAP

Hi to all,
is it possible anywhere in Essbase (Hyperion) to make manual data corrections in the data warehouse / OLAP?
Thank you
G.

Hi NareshV,
thanks for your reply. In fact, I also have some difficulty understanding this question (I am not the author) ;-), but OK. Let's say you have some records of data retrieved from Essbase (or Hyperion), for instance in an open Excel sheet. In Excel it is possible to delete or override any value in any column; is it possible to do this via the Essbase OLAP server?
Thanks, G.

Similar Messages

  • How to correct the data in the psa table?

    1Q. There are a lot of invalid characters in an InfoPackage of, say, 1 million records. It takes a lot of time to check each and every record in the data package (PSA) and correct it. I think a more efficient way to resolve this issue is to go into the PSA table and correct all the records there. Is that right? If yes, how do I do it?
    2Q. Say there are 30 data packages in the request and only data package 25 has the bad records. If I correct the data in the PSA and push it to the data target, it is going to process all the data packages one by one, which takes a lot of time and delays our process chain job that depends on the load. Can I just manually process this one data package? If yes, how do I do it?
    3Q. When I successfully correct all the bad records in the data package and push it from the PSA, the request doesn't turn green, and I have to manually set it to green in the data target after I verify that all the data packages have no bad records; it is a delta update. Is my process right? As it is a delta, what are the pitfalls I have to watch for? The next step after this is to compress the request, which is very dangerous because this basic cube holds a lot of history and would probably take weeks to reload. What precautions should I take before I set the status to green in the data target?
    Thanks in advance! I know how to thank SDN experts by assigning points.

    Hi,
    1Q. Update the invalid characters in the filter table using tcode RSKC, and also write an ABAP routine to filter out the invalid characters.
    2Q. For the incorrect data packet, you can right-click on the data packet in the monitor details tab and choose manual update. That way you don't need to reload the entire request.
    3Q. When you reload the request or update an individual data packet again, the request should automatically turn green; you don't have to turn it green manually. The pitfall is that if you turn a delta request green, you risk losing data and corrupting the delta. Best practice is to never turn a request green manually. Even if you have compressed the requests, you can use selective deletion to delete the data, and then use an InfoPackage with the same selections that you used for the deletion to load the same data back.
    Cheers,
    Kedar
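As an illustration of the filtering idea in 1Q, here is a minimal sketch in Python (not ABAP; the allowed-character set below is a hypothetical stand-in for the permitted-character list maintained via RSKC):

```python
# Sketch of invalid-character cleanup, a plain-Python analogue of an ABAP
# transfer routine. ALLOWED is a hypothetical stand-in for the RSKC list.
ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 -_./")

def clean_value(value: str) -> str:
    """Replace any character not in the allowed set with '#'."""
    return "".join(ch if ch.upper() in ALLOWED else "#" for ch in value)

records = ["ACME Corp.", "Bad\x01Value", "Text\twith tab"]
cleaned = [clean_value(r) for r in records]
print(cleaned)  # ['ACME Corp.', 'Bad#Value', 'Text#with tab']
```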

  • How to trigger provisioning - after approval task failed but data corrected

    Hi,
    We have a situation where the approvals are provided for the requests, but the approval task failed (denied) due to misconfiguration, even though all the approvals were actually provided by the required personnel. So we need to manually correct the data and trigger the provisioning. Could you please advise on how to manually (or otherwise) trigger the provisioning in this scenario?
    Thanks,
    Sudhakar

    Thanks Kevin. We could locate these requests and update the records in DB. But wasn't sure if we can trigger the automatic provisioning from there. I was thinking there may be a way to restart the workflow/provisioning from the point where it failed. That would be a nice feature to have.

  • Differences between operational systems data modeling and data warehouse data modeling

    Hello Everyone,
    Can anybody help me understand the differences between operational systems data modeling and data warehouse data modeling?
    Thanks

    Hello A S!
    What you mean is the difference between modelling in normal forms, as in operational systems (OLTP), e.g. 3NF, and modelling an InfoCube in a data warehouse (OLAP)?
    While in an OLTP system you want data tables free of redundancy and ready for transactions, meaning writing and reading a few records often, in an OLAP system you need to read a lot of data for every query you run on the database. Often in an OLAP system you aggregate these amounts of data.
    Therefore you use another principle for the database schema, called the star schema. This means you have one central table (called the fact table) which holds the key figures and has keys to other tables with characteristics. These other tables are called dimension tables; they hold combinations of the characteristics. Normally you design the dimensions to be small, so access to the data is more efficient.
    The star schema in SAP BI is a little more complex than explained here, but it follows the same concept.
    Best regards,
    Peter
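As an illustration of Peter's description, a minimal star schema and a typical aggregating query can be sketched as follows (hypothetical table and column names, using SQLite for a self-contained example):

```python
import sqlite3

# In-memory database with a minimal star schema (hypothetical names).
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
cur.execute("CREATE TABLE dim_time (time_id INTEGER PRIMARY KEY, year INTEGER)")
# Fact table: keys into the dimensions plus the key figure (sales amount).
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, time_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)", [(1, "A"), (2, "B")])
cur.executemany("INSERT INTO dim_time VALUES (?, ?)", [(10, 2007), (11, 2008)])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 10, 100.0), (1, 11, 50.0), (2, 11, 25.0)])

# Typical OLAP query: aggregate the key figure, sliced by dimension attributes.
rows = cur.execute("""
    SELECT p.category, t.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_time t ON t.time_id = f.time_id
    GROUP BY p.category, t.year
    ORDER BY p.category, t.year
""").fetchall()
print(rows)  # [('A', 2007, 100.0), ('A', 2008, 50.0), ('B', 2008, 25.0)]
```

The point of the shape: the fact table is long and narrow (keys plus key figures), the dimensions are small lookup tables, and most queries aggregate over the fact table while filtering or grouping by dimension attributes.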

  • Guide me for any way to avoid manual update for processing data package

    Hi sap gurus,
    I am new to SAP BI. In my project, every day I have to manually update the processing of data packets for an InfoCube. Can you tell me why it happens that sometimes data gets stuck and we have to do a manual update? Is there any possible solution to avoid or correct that? Can we make some kind of process chain that does these things automatically?
    Please reply, as it is very irritating to do the same thing every day.
    Thanks in advance.

    We have the program RSARFCEX, which can execute all of them in one go.
    But you cannot schedule it in a process chain, as you will never know when a tRFC will get stuck; it will run without doing anything if no tRFC is stuck.
    Pravender

  • Difference between OLAP, Data Mining and Data Warehousing

    Dear Sirs,
    I am new to the above topics, but I know Oracle DBA work very well and would like to move into this field. Can anyone tell me the basic Oracle software used for OLAP, data mining and data warehousing, and briefly the differences between these three?
    It would be of great help to me.
    Thanks & Regards,
    Manoj Mathew

    Hi Manoj Mathew,
    Check these links to what Oracle has to say about its own software specific for these topics:
    DataMining (tool is Oracle Data Miner): http://www.oracle.com/technology/products/bi/odm/index.html
    Datawarehousing (tool is OWB): http://www.oracle.com/technology/products/warehouse/index.html
    OLAP (tool = Analytic Workspace Manager): http://www.oracle.com/technology/products/bi/olap/olap.html
    Good luck, Patrick

  • How can you correct the data in your file which contains 1 lakh (100,000) records?

    Hi friends,
    I attended an interview and they asked these questions. If you know the answers, please tell me:
    In a file-to-file scenario, how can we reprocess the records which failed?
    How can you correct the data in your file which contains 1 lakh (100,000) records?
    Thanks in advance,
    thahir

    Hi,
    Refer these links:
    this might help you
    Generic Approach for Validating Incoming Flat File in SAP XI - Part 1
    validating against schema file for the output XML file
    Informing the sender about bad records
    Regards,
    Nithiyanandam

  • Ensure field sequence is correct for data for multiple source structures

    Hi,
    I'm using LSMW with IDOC message type 'FIDCC2' Basic type 'FIDCCP02'.
    I'm getting an error that packed fields are not permitted.
    I'm also getting: "Ensure field sequence is correct for data for multiple source structures".
    Source Structures
           HEADER_STRUCT            G/L  Account Document Header
               LINE_STRUCT              G/L Account Document Line
    Source Fields
           HEADER_STRUCT             G/L  Account Document Header
               BKTXT                          C(025)    Document  Header Text
               BLART                          C(002)    Document Type
               BLDAT                          DYMD(008) Document Date
               BUDAT                          DYMD(008) Posting Date
               KURSF                          C(009)    Exchange rate
               WAERS                          C(005)    Currency
               WWERT                          DYMD(008) Translation Date
               XBLNR                          C(016)    Reference
               LINE_STRUCT               G/L Account Document Line
                   AUFNR                          C(012)    Order
                   HKONT                          C(010)    G/L Account
                   KOSTL                          C(010)    Cost Center
                   MEINS                          C(003)    Base Unit of Measure
                   MENGE                          C(013)    Quantity
                   PRCTR                          C(010)    Profit Center
                   SGTXT                          C(050)    Text
                   SHKZG                          C(001)    Debit/Credit Ind.
                   WRBTR                          AMT3(013) Amount
    To avoid the error message that packed fields are not allowed, I have changed the packed (PAC3) fields to character fields of the same length.
    Structure Relations
           E1FIKPF FI Document Header (BKPF)         <<<< HEADER_STRUCT G/L  Account Document Header
                   Select Target Structure E1FIKPF .
               E1FISEG FI Document Item (BSEG)          <<<< LINE_STRUCT   G/L Account Document Line
                   E1FISE2 FI Document Item, Second Part of E1FISEG   (BSEG)
                   E1FINBU FI Subsidiary Ledger (FI-AP-AR) (BSEG)
               E1FISEC CPD Customer/Vendor  (BSEC)
               E1FISET FI Tax Data (BSET)
               E1FIXWT Extended Withholding Tax (WITH_ITEM)
    Files
           Legacy Data          On the PC (Frontend)
               File to read GL Account info   c:\GL_Account.txt
                                              Data for Multiple Source Structures (Sequential Files)
                                              Separator Tabulator
                                              Field Names at Start of File
                                              Field Order Matches Source Structure Definition
                                              With Record End Indicator (Text File)
                                              Code Page ASCII
           Legacy Data          On the R/3 server (application server)
           Imported Data        File for Imported Data (Application Server)
               Imported Data                  c:\SYNERGO_CREATE_LCNA_FI_GLDOC_CREATE.lsmw.read
           Converted Data       File for Converted Data (Application Server)
               Converted Data                 c:\SYNERGO_LCNA_FI_GLDOC_CREATE.lsmw.conv
           Wildcard Value       Value for Wildcard '*' in File Name
    Source Structures and Files
           HEADER_STRUCT G/L  Account Document Header
                         File to read GL Account info c:\GL_Account.txt
               LINE_STRUCT G/L Account Document Line
                           File to read GL Account info c:\GL_Account.txt
    File content:
    Document  Header Text     Document Type     Document Date     Posting Date     Exchange rate     Currency     Translation Date     Reference     
    G/L Account document     SA     20080401     20080409     1.05     CAD     20080409     Reference     
    Order     G/L Account     Cost Center     Base Unit of Measure     Quantity     Profit Center     Text     Debit/Credit Ind.     Amount
         44000022                    1040     Line item text 1     H     250
         60105M01     13431     TO     10          Line item text 2     S     150
    800000     60105M01                         Line item text 3     S     100
         60110P01     6617     H     40          Line item text 4     S     600
         44000022                    ACIBRAM     Line item text 5     H     600
    The file structure is as follow
    Header titles
    Header info
    Line titles
    Line1 info
    Line2 info
    Line3 info
    Line4 info
    Line5 info
    Could someone point me in the right direction?
    Thank you in advance!
    Curtis

    Hi,
    Thank you so much for your reply.
    For example, I have VBAK (header structure) and VBAP (item structure).
    I think my file should look like this:
    Identification content    Field names
    H                         VBELN   ERDAT      ERNAM
                              Field values for header
    H                         1000    20080703   swapna
    Identification content    Field names
    I                         VBELP   AUART
                              Field values for item
    I                         001     OR
                              002     OR
    Is this format correct?
    Please let me know whether I am correct or not.
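As a side note, a header/item layout like the one sketched above can be checked with a small parser that dispatches on the identification column. This is only an illustration with hypothetical field lists, not LSMW's actual file reader:

```python
# Parse tab-separated lines where the first column identifies the record type:
# 'H' rows carry header fields, 'I' rows carry item fields (hypothetical layout).
def parse_multi_structure(lines):
    headers, items = [], []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        ident, values = fields[0], fields[1:]
        if ident == "H":
            headers.append(dict(zip(["VBELN", "ERDAT", "ERNAM"], values)))
        elif ident == "I":
            items.append(dict(zip(["VBELP", "AUART"], values)))
    return headers, items

sample = [
    "H\t1000\t20080703\tswapna",
    "I\t001\tOR",
    "I\t002\tOR",
]
headers, items = parse_multi_structure(sample)
print(headers[0]["VBELN"], len(items))  # 1000 2
```

Note that every line, including each item line, carries its own identification value; a value row without an identifier cannot be assigned to a structure.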

  • Thinkpad T61 reports incorrect EDID data to Xorg

    Thinkpad T61 with Quadro NVS 140M reports incorrect EDID data. The primary issue is that I want to switch to 1024x768 resolution, which is well supported by LCD projectors, but currently, with the default settings of the nvidia driver, it offers only the following resolutions. Is there any BIOS update for this?
    root#xrandr
    Screen 0: minimum 640 x 480, current 1680 x 1050, maximum 1680 x 1050
    default connected 1680x1050+0+0 0mm x 0mm
       1680x1050      50.0*    51.0     52.0
       1600x1024      53.0
       1280x1024      54.0
       1280x960       55.0
       800x512        56.0
       640x512        57.0
       640x480        58.0
    Here is the snip of Xorg.0.log
    (II) Setting vga for screen 0.
    (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
    (==) NVIDIA(0): RGB weight 888
    (==) NVIDIA(0): Default visual is TrueColor
    (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
    (**) NVIDIA(0): Option "RenderAccel" "True"
    (**) NVIDIA(0): Option "AddARGBGLXVisuals" "True"
    (**) NVIDIA(0): Enabling RENDER acceleration
    (II) NVIDIA(0): Support for GLX with the Damage and Composite X extensions is
    (II) NVIDIA(0):     enabled.
    (II) NVIDIA(0): NVIDIA GPU Quadro NVS 140M (G86GL) at PCI:1:0:0 (GPU-0)
    (--) NVIDIA(0): Memory: 524288 kBytes
    (--) NVIDIA(0): VideoBIOS: 60.86.3e.00.00
    (II) NVIDIA(0): Detected PCI Express Link width: 16X
    (--) NVIDIA(0): Interlaced video modes are supported on this GPU
    (--) NVIDIA(0): Connected display device(s) on Quadro NVS 140M at PCI:1:0:0:
    (--) NVIDIA(0):     IBM (DFP-0)
    (--) NVIDIA(0): IBM (DFP-0): 330.0 MHz maximum pixel clock
    (--) NVIDIA(0): IBM (DFP-0): Internal Dual Link LVDS
    (WW) NVIDIA(0): The EDID for IBM (DFP-0) contradicts itself: mode "1680x1050"
    (WW) NVIDIA(0):     is specified in the EDID; however, the EDID's valid
    (WW) NVIDIA(0):     HorizSync range (53.398-64.075 kHz) would exclude this
    (WW) NVIDIA(0):     mode's HorizSync (42.7 kHz); ignoring HorizSync check for
    (WW) NVIDIA(0):     mode "1680x1050".
    (WW) NVIDIA(0): The EDID for IBM (DFP-0) contradicts itself: mode "1680x1050"
    (WW) NVIDIA(0):     is specified in the EDID; however, the EDID's valid
    (WW) NVIDIA(0):     VertRefresh range (50.000-60.000 Hz) would exclude this
    (WW) NVIDIA(0):     mode's VertRefresh (40.1 Hz); ignoring VertRefresh check
    (WW) NVIDIA(0):     for mode "1680x1050".
    (WW) NVIDIA(0): The EDID for IBM (DFP-0) contradicts itself: mode "1680x1050"
    (WW) NVIDIA(0):     is specified in the EDID; however, the EDID's valid
    (WW) NVIDIA(0):     HorizSync range (53.398-64.075 kHz) would exclude this
    (WW) NVIDIA(0):     mode's HorizSync (42.7 kHz); ignoring HorizSync check for
    (WW) NVIDIA(0):     mode "1680x1050".
    (WW) NVIDIA(0): The EDID for IBM (DFP-0) contradicts itself: mode "1680x1050"
    (WW) NVIDIA(0):     is specified in the EDID; however, the EDID's valid
    (WW) NVIDIA(0):     VertRefresh range (50.000-60.000 Hz) would exclude this
    (WW) NVIDIA(0):     mode's VertRefresh (40.1 Hz); ignoring VertRefresh check
    (WW) NVIDIA(0):     for mode "1680x1050".
    (II) NVIDIA(0): Assigned Display Device: DFP-0
    (WW) NVIDIA(0): No valid modes for "default"; removing.
    (WW) NVIDIA(0):
    (WW) NVIDIA(0): Unable to validate any modes; falling back to the default mode
    (WW) NVIDIA(0):     "nvidia-auto-select".
    (WW) NVIDIA(0):
    (II) NVIDIA(0): Validated modes:
    (II) NVIDIA(0):     "nvidia-auto-select"
    (II) NVIDIA(0): Virtual screen size determined to be 1680 x 1050
    (--) NVIDIA(0): DPI set to (129, 127); computed from "UseEdidDpi" X config
    (--) NVIDIA(0):     option
    (**) NVIDIA(0): Enabling 32-bit ARGB GLX visuals.
    (--) Depth 24 pixmap format is 32 bpp
    (II) do I need RAC?  No, I don't.
    (II) resource ranges after preInit:
        [0] -1    0    0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B)
        [1] -1    0    0x000f0000 - 0x000fffff (0x10000) MX[B]
        [2] -1    0    0x000c0000 - 0x000effff (0x30000) MX[B]
        [3] -1    0    0x00000000 - 0x0009ffff (0xa0000) MX[B]
        [4] -1    0    0xf8102400 - 0xf81024ff (0x100) MX[B]
        [5] -1    0    0xf8102000 - 0xf81020ff (0x100) MX[B]
        [6] -1    0    0xf8101c00 - 0xf8101cff (0x100) MX[B]
        [7] -1    0    0xf8101800 - 0xf81018ff (0x100) MX[B]
        [8] -1    0    0xf8101000 - 0xf81017ff (0x800) MX[B]
        [9] -1    0    0xdf2fe000 - 0xdf2fffff (0x2000) MX[B]
        [10] -1    0    0xfe227400 - 0xfe2274ff (0x100) MX[B]
        [11] -1    0    0xfe226000 - 0xfe2267ff (0x800) MX[B]
        [12] -1    0    0xfe227000 - 0xfe2273ff (0x400) MX[B]
        [13] -1    0    0xfe220000 - 0xfe223fff (0x4000) MX[B]
        [14] -1    0    0xfe226c00 - 0xfe226fff (0x400) MX[B]
        [15] -1    0    0xfe225000 - 0xfe225fff (0x1000) MX[B]
        [16] -1    0    0xfe200000 - 0xfe21ffff (0x20000) MX[B]
        [17] -1    0    0xd4000000 - 0xd5ffffff (0x2000000) MX[B](B)
        [18] -1    0    0xe0000000 - 0xefffffff (0x10000000) MX[B](B)
        [19] -1    0    0xd6000000 - 0xd6ffffff (0x1000000) MX[B](B)
        [20] 0    0    0x000a0000 - 0x000affff (0x10000) MS[B]
        [21] 0    0    0x000b0000 - 0x000b7fff (0x8000) MS[B]
        [22] 0    0    0x000b8000 - 0x000bffff (0x8000) MS[B]
        [23] -1    0    0x0000ffff - 0x0000ffff (0x1) IX[B]
        [24] -1    0    0x00000000 - 0x000000ff (0x100) IX[B]
        [25] -1    0    0x00001c60 - 0x00001c7f (0x20) IX[B]
        [26] -1    0    0x00001c20 - 0x00001c3f (0x20) IX[B]
        [27] -1    0    0x00001c18 - 0x00001c1b (0x4) IX[B]
        [28] -1    0    0x00001c40 - 0x00001c47 (0x8) IX[B]
        [29] -1    0    0x00001c1c - 0x00001c1f (0x4) IX[B]
        [30] -1    0    0x00001c48 - 0x00001c4f (0x8) IX[B]
        [31] -1    0    0x00001830 - 0x0000183f (0x10) IX[B]
        [32] -1    0    0x000001f0 - 0x000001f0 (0x1) IX[B]
        [33] -1    0    0x000001f0 - 0x000001f7 (0x8) IX[B]
        [34] -1    0    0x000001f0 - 0x000001f0 (0x1) IX[B]
        [35] -1    0    0x000001f0 - 0x000001f7 (0x8) IX[B]
        [36] -1    0    0x000018e0 - 0x000018ff (0x20) IX[B]
        [37] -1    0    0x000018c0 - 0x000018df (0x20) IX[B]
        [38] -1    0    0x000018a0 - 0x000018bf (0x20) IX[B]
        [39] -1    0    0x00001880 - 0x0000189f (0x20) IX[B]
        [40] -1    0    0x00001860 - 0x0000187f (0x20) IX[B]
        [41] -1    0    0x00001840 - 0x0000185f (0x20) IX[B]
        [42] -1    0    0x00002000 - 0x0000207f (0x80) IX[B](B)
        [43] 0    0    0x000003b0 - 0x000003bb (0xc) IS[B]
        [44] 0    0    0x000003c0 - 0x000003df (0x20) IS[B]
    (II) NVIDIA(0): Initialized GPU GART.
    (II) NVIDIA(0): Setting mode "nvidia-auto-select"
    (II) Loading extension NV-GLX
    (II) NVIDIA(0): NVIDIA 3D Acceleration Architecture Initialized
    (II) NVIDIA(0): Using the NVIDIA 2D acceleration architecture
    (==) NVIDIA(0): Backing store disabled
    (==) NVIDIA(0): Silken mouse enabled
    (**) Option "dpms"
    (**) NVIDIA(0): DPMS enabled
    (II) Loading extension NV-CONTROL
    (WW) NVIDIA(0): Option "CalcAlgorithm" is not used
    (==) RandR enabled

    Hi,
    Please check whether you are able to open the PDF through  SOST transaction.
    http://www.sapdevelopment.co.uk/reporting/rep_spooltopdf.htm
    Best regards,
    Prashant

  • How to display an "All Day Event" date correctly in an integrated SSRS Report?

    I have two event items in a calendar list in SharePoint 2010. Both items have the same start time and end time. One of them, however, has the "All Day Event" checkbox checked. If I access them through a REST service, this is how the data is returned:
    <!-- item 1 -->
    <d:StartTime m:type="Edm.DateTime">2014-03-21T00:00:00</d:StartTime>
    <d:EndTime m:type="Edm.DateTime">2014-03-25T23:55:00</d:EndTime>
    <d:AllDayEvent m:type="Edm.Boolean">false</d:AllDayEvent>
    <!-- item 2 -->
    <d:StartTime m:type="Edm.DateTime">2014-03-21T00:00:00</d:StartTime>
    <d:EndTime m:type="Edm.DateTime">2014-03-25T23:59:00</d:EndTime>
    <d:AllDayEvent m:type="Edm.Boolean">true</d:AllDayEvent>
    I have a report in the same SharePoint 2010 site that uses SSRS in integrated mode. The data source is the calendar list mentioned above. The date fields are not formatted, just displayed as they come from the list/database.
    My locale is set to en-US. When I run the report, the start date for item 1 is displayed as "3/21/2014" ('all day' set to false) but the start date for item 2 is displayed as "3/20/2014" which is incorrect ('all day' set to true).
    I did some research online and found out that SharePoint stores all date fields as UTC except for all-day events, which are stored in local time (our servers are in Central Time, but I'm running the report from Pacific Time, in the US).
    I couldn't find a solution to display the date correctly in the integrated SSRS report. Is there a way, maybe some straightforward formatting, to show all-day event dates correctly? I tried adding hours, but this is inconsistent with daylight saving time changes.
    I would appreciate any help.
    C#, Sharepoint

    Hi SharpBud,
    The date for an all-day event is stored in SQL in GMT, so the start time returned for an all-day event is in GMT, which is most likely not your local time.
    This is a confirmed issue. As a workaround, I would suggest you use a calculated column for the event, with the following formula:
    IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",IF([Start Time]=ROUND([Start Time],0),[Start Time],DATE(YEAR([Start Time]),MONTH([Start Time]),DAY([Start Time])+1)),[Start Time])
    Thanks,
    Qiao Wei
    TechNet Community Support
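The underlying shift can be illustrated with a short Python sketch (assuming, as described in the thread, that the all-day start is stored as midnight UTC and the report is viewed from US Pacific time; the dates mirror item 2 above):

```python
from datetime import datetime, timedelta, timezone

# An all-day event stored as midnight UTC (matching item 2 in the thread).
start_utc = datetime(2014, 3, 21, 0, 0, tzinfo=timezone.utc)

# Rendering in US Pacific time (UTC-7 under daylight saving) pulls the
# timestamp back to the previous evening, so the displayed date is off by one.
pacific = timezone(timedelta(hours=-7))
print(start_utc.astimezone(pacific).date())  # 2014-03-20

# For all-day events, the fix is to keep the stored calendar date as-is
# instead of converting it to the viewer's timezone.
print(start_utc.date())  # 2014-03-21
```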

  • Drop down selection is not refreshing the page data correctly

    I have a drop down menu for a CREATE DATE. The record is based on a Identification ID. This identification ID can be in our system for multiple CREATE DATES. The CREATE DATE drop down is populated by checking how many different CREATE DATES there are for a particular identification ID.
    The last identification ID looks great. It populates all the fields. However if you select an older CREATE DATE, the page does not get populated with that CREATE DATES data. The whole page even looks different. It looks like an older version of the page before I added all the new fields.
    What is causing the pages for the older CREATE DATEs not to look like the page for the current create date?
    Please let me know if you need any additional information. Thank you!!
    Pattibz

    Thank you for your reply. Here are some details. Please let me know if I am not giving the information you need.
    One page item called CREATE_DATE displayed as a Select LIST with submit. The source type is a database column called ID_IDENT. The ID_IDENT is just a
    The List of Values is: Named LOV is the Select Named LOV. The list of values definition is a query against a table in our database:
    select CRE_DT, ID_IDENT from IDENT_TABLE where UNIQ_ID = :P3_UNIQ_ID
    ORDER BY CRE_DT DESC
    This is the date on which the ID_IDENT data was created. The date is displayed in a pull down menu, which allows you to select from a list of available "as of" dates when viewing the data. Only data on the most recent ID_IDENT record may be manually altered.
    I have a UNIQ_ID = '34587443'. It actually has two records in the database with that UNIQ_ID; the only difference is the create date. One record was created on 10/1/2008 and the other on the current date, 10/13/2008.
    I see these two dates in the CREATE_DATE pull down menu. This is fine. The drop down orders the dates DESC, so I have the latest date at the top of the pull down.
    If I want to pick the date of 10/1/2008, I go into the web page, open the Create Date pull down and select the older date.
    When I pick the older date, 10/1/2008, the page does not look the same as it does for the default (most recent) date.
    The page looks different, even though most of the data is on the webpage.
    What should have happened is that when I choose the older date from the drop down, the page refreshes with the data from 10/1/2008 and all the parameters are displayed as of that date. Instead, only part of the fields are populated, and the page itself doesn't refresh to the look of the original page. Thank you.
    Edited by: pattibz on Sep 12, 2008 11:15 AM

  • Correct end date to a present or future date - Section80 and 80C deduction

    Hi SDN,
    We are getting following exception for Section80 and Section80C deductions when click on Review button.
    "Data valid only for the past is not allowed; correct the end date to a present or future date. Begin date should start from or higher." We have reset the Valid From date to the current financial year starting date (4/1/2011) using BADI HRXSS_PER_BEGDA_IN, but the Valid To date is taking 3/31/2011 instead of 3/31/2012. Where can we change this Valid To date? Please help me.
    regards,
    Sree.

    Dear Punna rao
    Kindly check the BADI HRXSS_PER_BEGDA_IN and activate it.
    Below is the IMG path for this BADI:
    Employee Self-Service --> Service-specific settings --> Own data --> Change Default Start Date
    Ask your technical team to activate this BADI, as an access key is required to activate it. Kindly follow this process and let me know the outcome, as this should solve your issue.
    Hope this info will be useful.
    Cheers
    Pradyp

  • Is there a way to manually custom-edit the "Date created" and "Date modified" attributes of files in Mac OS 10.6?

    In "List View" in a Finder window, among the many column options are "Date created" and "Date modified." In "View Options" (command-J) for any folder, one can add these columns, along with the standard ones "Size," "Kind," "Label," etc.
    A user can alter a file's name, and a file's "label" (i.e. its color). On the other hand, a user can NOT arbitrarily edit/alter a file's official "size" -- other than by physically altering the contents of the file itself, obviously.
    But what about a file's "Date created" and "Date modified"? Can either of those be manually edited/changed, just as a file's name can be changed -- or is a file's creation-date an immutable attribute beyond the editorial reach of the user, just as a file's "size" is?
    And yes, a person can "alter" a file's "Date modified" by simply modifying the file, which would change its "Date modified" to be the moment it was last altered (i.e. right now). But (and here's the key question) can a user somehow get inside a file's defining attributes and arbitrarily change a file's modification date to be at any desired point in the past that's AFTER its creation date and BEFORE the present moment? Or will so doing cause the operating system to blow a gasket?
    If it is possible to arbitrarily manually alter a file's creation date or modification date, how would it be done (in 10.6)? And if it is NOT possible, then why not?

    sanjempet --
    Whew, that's a relief!
    But as for your workaround solution: All it will achieve is altering the created and modified dates to RIGHT NOW. What I'm looking to do is to alter the modification/creation dates to some point in the past.
    I'm not doing this for any nefarious reason. I just like to organize my work files chronologically according to when each project was initiated, but sometimes I forget to gather the disparate documents into one folder right at the beginning. As a result, sometimes after I finish a job I will create a new folder to permanently house all the files of an old project. When that folder is placed in a bigger "completed projects" folder and then organized by "Date created" or "Date modified" in list view, it is out of order chronologically, because the creation and modification dates of that project folder reflect when the folder was created (i.e. today), not when the files inside it were created (i.e. weeks or months ago).
    The simplest solution would simply to be able to back-date the folder's creation or modification date to match the date that the project actually started!
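For the modification date specifically, back-dating a file programmatically is possible. Here is a minimal sketch using only Python's standard library (the target date is hypothetical; this touches the access and modification times, not the HFS+ creation date):

```python
import os
import tempfile
from datetime import datetime

# Create a scratch file, then back-date its modification time to a
# hypothetical past date. Only mtime/atime are changed, not the creation date.
fd, path = tempfile.mkstemp()
os.close(fd)

past = datetime(2008, 6, 1, 12, 0).timestamp()
os.utime(path, (past, past))  # (access time, modification time)

mtime = os.path.getmtime(path)
print(datetime.fromtimestamp(mtime))  # 2008-06-01 12:00:00
os.remove(path)
```

From the shell, `touch -t 200806011200 somefile` does the same for the modification time; changing the creation date on Mac OS X 10.6 typically requires the Developer Tools' `SetFile` utility.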

  • The value in flexfield context reference web bean does not match with the value in the context of the Descriptive flexfield web bean BranchDescFlex. If this is not intended, please go back to correct the data or contact your Systems Administrator for assistance

    Hi ,
    We have enabled a context-sensitive DFF in the Bank Branch page for the HZ_PARTIES DFF. We have created a flex map so that only the bank branch context fields are displayed in the Bank Branch page; as we know, the party information DFF is shared by the Supplier and Customer pages, and we didn't want to see any bank branch fields or context information on those pages.
    We have achieved the requirement, but when we open existing branches, the bank branch update throws the error message below:
    "The value in flexfield context reference web bean does not match with the value in the context of the Descriptive flexfield web bean BranchDescFlex. If this is not intended, please go back to correct the data or contact your Systems Administrator for assistance."
    This error is thrown only when we open existing branches; if we save an existing branch and then open it, it does not throw any error message.
    Please let us know reason behind this error message.
    Thanks,
    Mruduala

    You are kidding? It took me about 3 minutes to scroll down on my tab to get to the triplex button!
    Have you read the error message?
    Quote:
    java.sql.SQLSyntaxErrorException: ORA-04098: trigger 'PMS.PROJECT_SEQ' is invalid and failed re-validation
    Check the trigger and it should work again.
    Timo

  • Some photos are not sorted correctly by date

    I just imported some photos from my friend's camera; he was on the same trip. However, I noticed that a couple (not all) of his photos are not sorted correctly by date in the Event. A photo from day 3 somehow comes before the photos from day 2. I checked the extended information and the dates on the photos are correct. Has anyone had this problem and know of a solution? Thanks.

    It might be of some help to delete the photo cache. Please see this link - http://support.apple.com/kb/TS1314
