Blank User in table SWWUSERWI - Large Volume

Hi All,
I have a problem: when a user clicks on his inbox in SAP, it times out.
There are a large number of General tasks in use in our system. The table SWWUSERWI has many entries with a blank user name, and these are picked up and displayed in the user's inbox. They are unwanted entries for the user, so I decided to delete them at the DB level (as a quick fix). After deleting these entries the inbox opened quickly. But to my surprise, all the millions of entries reappeared in the table. I am not sure what is happening in the system that brings these entries back. All of them are very old work items in READY status.
I know we can run the report and delete them permanently, but I am more interested in knowing why the user was blank in the table.
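A minimal sketch of the kind of DB-level check involved (USER_ID below is only a placeholder for the agent column; verify the real field names of SWWUSERWI in SE11 before running anything):
SELECT COUNT(*)
FROM swwuserwi
WHERE user_id = ' ';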
Regards,
Vijay V

Hi Vijay,
I'm also facing a problem with the inbox. Due to the huge volume of work items, it ends in a short dump.
Please suggest the program to permanently delete these unnecessary work items.
Looking forward to your suggestions on this.
Thanks,
Naveen

Similar Messages

  • Create a GPT partition table and format with a large volume (solved)

    Hello,
    I'm having trouble creating a GPT partition table for a large volume (~6T). It is a RAID 5 (hardware) with 3 hard disk drives having a size of 3T each (thus the resulting 6T volume).
    I tried creating a GPT partition table with gdisk, but it never finishes writing it, stopping here (I've let it run for about 3 hours):
    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!
    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/md126.
    I also tried with parted, but I get the same result. Out of luck, I created a GPT partition table from Windows 7 with 2 NTFS partitions (15G for one and the rest of the space for the other), and it worked just fine. I then tried to format the 15G partition as ext4 but, as with gdisk, mkfs.ext4 just never finishes.
    Some information:
    fdisk -l
    Disk /dev/sda: 256.1 GB, 256060514304 bytes, 500118192 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0xd9a6c0f5
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 104861695 52429824 83 Linux
    /dev/sda2 104861696 466567167 180852736 83 Linux
    /dev/sda3 466567168 500117503 16775168 82 Linux swap / Solaris
    Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    Disk /dev/sde: 320.1 GB, 320072933376 bytes, 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x5ffb31fc
    Device Boot Start End Blocks Id System
    /dev/sde1 * 2048 625139711 312568832 7 HPFS/NTFS/exFAT
    Disk /dev/md126: 6001.1 GB, 6001143054336 bytes, 11720982528 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 65536 bytes / 131072 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/md126p1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
    gdisk -l on my RAID volume (/dev/md126):
    GPT fdisk (gdisk) version 0.8.7
    Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present
    Found valid GPT with protective MBR; using GPT.
    Disk /dev/md126: 11720982528 sectors, 5.5 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 8E7D03F1-8C3A-4FE6-B7BA-502D168E87D1
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 11720982494
    Partitions will be aligned on 8-sector boundaries
    Total free space is 6077 sectors (3.0 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 34 262177 128.0 MiB 0C01 Microsoft reserved part
    2 264192 33032191 15.6 GiB 0700 Basic data partition
    3 33032192 11720978431 5.4 TiB 0700 Basic data partition
    To make things clear: sda is an SSD on which Archlinux has been freshly installed (sda1 for root, sda2 for home, sda3 for swap), sde is a hard disk drive having Windows 7 installed on it. My goal with the 15G partition is to format it so I can mount /var on the HDD rather than on the SSD. The large volume will be for storage.
    So if anyone has any suggestion that would help me out with this, I'd be glad to read.
    Cheers
    Last edited by Rolinh (2013-08-16 11:16:21)

    Well, I finally decided to use a software RAID as I will not share this partition with Windows anyway and it seems a better choice than the fake RAID.
    Therefore, I used the mdadm utility to create my RAID 5:
    # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
    It works like a charm.

  • Large volume tables in SAP

    Hello All,
    Does anyone have a list of the large-volume tables in SAP (tables which might create a problem in SELECT queries)?

    Hi Nirav,
    There is no specific list as such. But irrespective of the amount of data in a table, if you provide the full primary key in the SELECT query there will be no issue with the SELECT.
    Still, if you want to see the largest tables, check transaction DB02.
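    For example (a minimal sketch; the client and material number below are made-up values), a SELECT that supplies the full primary key of MARA, i.e. MANDT plus MATNR, touches at most one row no matter how large the table is:
    SELECT matnr, mtart
    FROM mara
    WHERE mandt = '100'
      AND matnr = 'TEST-MATERIAL-1';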
    Regards,
    Atish

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case.
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of tablespace etc., and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have 100 any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc to the corresponding variable definition (for validation etc) at runtime.
    CASE_ID VARCHAR2(13)
    COL001 VARCHAR2(10)
    ...
    COL250 VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
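    For illustration only, here is a minimal sketch of the pivot+validate step for a single column (the table and function names are placeholders; the real statement is built per column with dynamic SQL, passing the variable ID mapped to that column at runtime):
    INSERT INTO intermediate_long_thin
      (case_num_id, variable_id, variable_value, status)
    SELECT s.case_num_id,
           101,                              -- placeholder VARIABLE_ID mapped to COL001 at runtime
           s.col001,
           validate_variable(101, s.col001)  -- placeholder PL/SQL validation function
    FROM short_fat_1 s;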
    Chris

  • How to extract data from table for huge volume

    Hi,
    I have around 200,000 material document numbers for which I need to get the material number from table MSEG, but when I use SE16 it gives a dump. I have even tried breaking it into batches of 20,000 records, but SAP still dumps when executing SE16 for MSEG. Please advise if there is any alternative way to get the data for such a large volume.
    Note: In our system SE16N does not work; only SE16 is available for our SAP version.
    Thanks,
    Vihaan

    Hi Jurgen,
    Thanks for your reply.
    I get a dump when I enter more than 5,000 records as input parameters for MSEG; beyond that it dumps with "ABAP runtime error SAPSQL_STMNT_TOO_LARGE".
    I understand that I can extract the data restricting the input to 5,000 records each time, but I have around 250,000 material docs, so with batches of 5,000 I would need to run the step more than 50 times --> 50 Excel files. I wanted to avoid that, as it is going to take a lot of my time.
    Any suggestion? Please help.
    I also wanted to highlight that apart from the material document numbers I am entering Plant (8 plants) and Movement type (14 movement types) as input parameters.
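    As I understand it, this is roughly the shape of the statement SE16 builds behind the scenes; every single value entered on the selection screen becomes part of one WHERE clause, which is presumably why thousands of document numbers plus the plants and movement types push it past the statement size limit (the values below are made up):
    SELECT mblnr, mjahr, zeile, matnr, werks, bwart
    FROM mseg
    WHERE mblnr IN ('5000000001', '5000000002' /* ...thousands more... */)
      AND werks IN ('P001', 'P002' /* ...8 plants... */)
      AND bwart IN ('101', '102' /* ...14 movement types... */);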
    Regards,
    Vihaan
    Edited by: Vihaan on Mar 25, 2010 12:30 AM

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with the issues, particularly when a large volumn of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues. The main point is that there will be no shortcut for major schema and index changes. You will need at least 120% free space to create a clustered index and facilitate major schema changes.
    I suggest an incremental approach to address your biggest pain points. You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650-million-row table. Perhaps some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using? You'll have more options with Enterprise (partitioning, row/page compression).
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column. Then create a new table (using SELECT INTO) that has strongly typed columns for the columns that are not problematic, plus the others that cannot easily be converted, then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations.
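    A minimal sketch of that conformance check for one column (the table and column names are placeholders):
    SELECT COUNT(*)                                              AS non_null_rows,
           COUNT(TRY_CONVERT(int, SomeVarcharColumn))            AS converts_to_int,
           COUNT(*) - COUNT(TRY_CONVERT(int, SomeVarcharColumn)) AS fails_to_convert
    FROM dbo.BigStagingTable
    WHERE SomeVarcharColumn IS NOT NULL;  -- TRY_CONVERT returns NULL on failure, so COUNT() skips those rows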
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • UDT and UDF - User-defined Tables and Fields

    Dear All,
    I am writing a Query to permit the Cashier to check her Cash entries and balances on a Daily basis.
    Basically, it's a General Ledger query, but I want the Query - Selection Criteria window to display only a few GL codes, namely 1240601, 1240602, 1240603, etc.
    I don't know if I am doing it right. This is what I did (SAP B1 8.8):
    UDT
    I created a UDT called TEST2 using:
    Tools -> Customization Tools -> User-defined Tables - Setup
    UDF
    Then I created a field in the UDT called GlCod using User-Defined Fields - Management
    Title : GlCod
    Description : GL Code
    Type : Alphanumeric 30
    Field Data
    In the Field Data window, I ticked the Set Valid Values for Fields checkbox and filled in the blanks as follows:
    #                  Value                Description
    1                 1240601             Cash in Hand (Rs)
    2                 1240602             Cash in Hand (USD Notes)
    3                 1240603             Cash in Hand (Euro Notes)
    etc...
    Query
    Then I wrote my Query (see below).
    When I run it, I get the Selection Criteria screen as I wanted:
    Query - Selection Criteria
    GL Code                                   ...............   (arrow here)
    Posting Date                              ...............
    [OK]                [Cancel]
    When I click on the GL Code arrow, I get a window with the exact choices I need. It looks like this:
    1240601 -  Cash in Hand (Rs)
    1240602 -  Cash in Hand (USD Notes)
    1240603 -  Cash in Hand (Euro Notes)
    Executing the Query
    The Query seems to run normally, but nothing is generated on the screen, and there's no Error Message.
    What can be wrong about this query?
    I suspect that the GL codes in JDT1 and TEST2 are not of the same data type, so that INNER JOIN returns nothing.
    Thanks,
    Leon Lai
    Here's my SQL
    declare @TEST2 TABLE
    (GlCod varchar(30))
    declare @GlCod nvarchar (30)
    set @GlCod =/*SELECT T0.U_GlCod from [dbo].[@TEST2] T0 where T0.U_GlCod=*/  '[%0]'
    declare @refdt datetime
    set @refdt=/*SELECT T1.RefDate from [dbo].[JDT1] T1 where T1.RefDate=*/ '[%1]'
    select
    t1.Account as 'GL Code',
    t1.RefDate as 'Posting Date',
    t0.U_GlCod as 'Restricted GL Codes'
    from JDT1 T1
    INNER JOIN @TEST2 T0 ON T0.[U_GlCod] = T1.[Account]
    WHERE
    t1.RefDate <= @refdt
    and
    t0.U_GLCod = @GlCod

    Try this:
    declare @GlCod nvarchar (30)
    set @GlCod =/*SELECT T0.U_GlCod from [dbo].[@TEST2] T0 where T0.U_GlCod=*/  '[%0]'
    declare @refdt datetime
    set @refdt=/*SELECT T1.RefDate from [dbo].[JDT1] T1 where T1.RefDate=*/ '[%1]'
    select
    t1.Account as 'GL Code',
    t1.RefDate as 'Posting Date'
    from JDT1 T1
    WHERE
    t1.RefDate <= @refdt
      and
    T1.[Account] = @GlCod
    (There is no need to declare the memory table variable @TEST2 if you have already created a user-defined table with this name, and there is no need for a join: the declared table variable is never populated, so the INNER JOIN against it returns no rows.)
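    A quick way to see this in miniature:
    DECLARE @TEST2 TABLE (GlCod varchar(30));
    SELECT COUNT(*) AS rows_in_test2 FROM @TEST2;  -- returns 0, so any INNER JOIN against it returns no rows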
    Edited by: István Korös on Aug 15, 2011 1:27 PM

  • Regarding table SWWUSERWI

    Hi All,
    I have a doubt related to a workflow table.
    When a workflow triggers, a work item goes to the SAP inbox of, say, the first approver.
    But he has not yet clicked that work item.
    At this point the status of the work item is READY, and for this work item there is no user ID in table SWWUSERWI.
    When the work item is clicked by the user, even if he takes no action on it, the status becomes STARTED, and now there is a user ID in table SWWUSERWI corresponding to that work item.
    My problem is that whenever a mail is sent to the SAP inbox of the user, I need to inform him about it by sending a mail to him in Outlook.
    So please guide me: how can I get the user ID for the work item if its status is READY?
    Is there a problem in the workflow design?
    Rishi

    Hi,
    You can configure Extended Notifications for this.
    Just specify the description for the task, and it will be sent to the user automatically.
    The user's email ID should be maintained in user maintenance, and make sure that extended notification has been configured in your system.
    Check the SCOT settings and execute the report SWN_SELSEN.
    Check out the blog "How to get Work items @ your Outlook Inbox".
    Regards
    SM Nizamudeen

  • Retrive SQL from Webi report and process it for large volume of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'Extracts'. But the requirement is that business users want to build their own 'Ad-hoc Extracts'. The only way I can think of to achieve this is to build a universe, create the query, save the report without running it, then write a RAS SDK program to retrieve the SQL from the reports, save it to a .txt file, and process it directly in Teradata.
    Is there any predefined solution available with SAP BO, or any other tool, for this kind of scenario?

    Hi Shawn,
    Do we have a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information, or even a pointer to where I can find information, would be helpful.
    Thanks in advance.
    Ashesh

  • Managing large volumes of images

    I have a large volume of images. Mostly in raw format. I almost lost them all a few years ago when something happened to iPhoto 06. Since that time I avoided iPhoto and have been managing the file structure myself and using Lightroom.
    All my images are now stored on a NAS running Raid 0. I am feeling a little more secure now, so....
    ...I am interested to know what database improvements have been made to iPhoto. Is it safe to use with that much data? Does it work well with Lightroom? How does it work with Aperture, or does Aperture just replace iPhoto? Can the iPhoto or Aperture database reside on my NAS?
    Cheers.

    1. The protection against any database failure is a good current back up. iPhoto makes a automatic back up of the database file. This facilitates recovery from issues. However this is not a substitute for a back up of the entire Library.
    2. The number of images is what's important to iPhoto, not the total file size. iPhoto is good for 250k images and we've seen reports on here from folks with more than 100,000. So it will work with volume.
    3. It doesn't work with Lightroom at all. This is germane to Aperture as well.
    iPhoto, Lightroom and Aperture are all essentially Database applications. All of them want to manage the data and will only share the data under certain protocols.
    Aperture and Lightroom don't actually edit photos. When you process a photo in these apps, the file is not changed. Instead, the decisions are recorded in the database and applied live when you view the pic. With Lightroom the only way to get an edited image to iPhoto is to Export from LR and then import to iPhoto. (There was a plug-in to automate that process but I have no idea if it's been updated since LR 1.)
    Aperture can share its Previews with iPhoto, but that's all. Otherwise you need to do the export/import dance.
    What communication there is between Aperture and iPhoto is designed to facilitate upgrading from iPhoto to Aperture. Yes, Aperture is a complete replacement for iPhoto.
    Neither the iPhoto nor Aperture Libraries can live on your NAS. However, the file management tools in Aperture are such that you can easily store the files on your NAS while the Library is on the HD. You can also do this with iPhoto but I wouldn't recommend it.
    Frankly, if you're a Raw shooter I don't understand why you would consider changing from the pro-level LR to a home user's iPhoto.
    Regards
    TD

  • Convert large volumes in Oracle

    I am trying to convert large tables from one instance to another (a simple conversion).
    The tables have the same layout in both systems, but doing the conversion takes ages.
    So now we are trying to convert via Oracle (exp/imp).
    The schema IDs are different, but that issue can be tackled.
    Now we have found that the Oracle tables in the target system are always created with a 'not null' constraint.
    As a consequence, the import is impossible.
    Any ideas on how to tackle this, or any other ideas on how to transfer large volumes?

    Hi,
    If I understand correctly, you want to copy a table's contents from one system to another. At this stage, because of the constraints, you are not allowed to insert the records at the target site.
    So, you exported the table from the source system, where the fields have no "NOT NULL" constraint; at the target site, however, the same fields do have a "NOT NULL" constraint.
    Under these circumstances, you may create the table at the destination first, without the "NOT NULL" constraints, and then import the records into it.
    You can use "brspace -f tbexport ..." for the export/import operations. Check Note 646681 - Reorganizing tables with BRSPACE.
    Best regards,
    Orkun Gedik

  • Table SWWUSERWI empty

    Hi Colleagues,
    I'm trying to implement a basic travel workflow approval process. I've created a basic organizational plan, as attached (organizational_plan.PNG). The user Maria and Manuel are both linked with valid system users.
    When I execute transaction TRIP with user Manuel, I'm able to create the trip as expected, but the problem is that, after saving the trip, table SWWUSERWI is not filled with the approval task for user Maria (whom I expect to be the approver, following the organizational plan defined).
    Do you know which step I'm missing here? One additional piece of information: user Manuel is able to approve his own trip, which is also unexpected.
    Thanks and regards,
    Roberto Falk

    Hi Vignesh,
    I've activated the workflow WS20000050, and the log looks different now:
    Object Type   Event            Curr. Date   Time       Name of...            Handler/Action
    Trace ON                       21.03.2014   14:34:03   F_EMPLOYEE
    BUS2089       CHANGED          21.03.2014   14:34:09   No receiver entered
    BUS2089       REQUESTCREATED   21.03.2014   14:34:09   WS20000050            SWW_WI_CREATE_VIA_EVENT_IBF
    Trace OFF                      21.03.2014   14:34:14   F_EMPLOYEE
    How can I check which user was assigned as the approver (if any)?
    Thanks and regards,
    Roberto Falk

  • Initializing large volume 0FI_GL_4

    Just looking for some confirmation of the procedure for initialization of the 0FI_GL_4 extractor when dealing with very large volume.
    In the past, attempts at wide open initialization loads for 0FI_GL_4 have failed at my client.
    I was thinking of running an initialization with data for each fiscal period. Then for the last init containing the current month I would make the initialization selections like this:
    07/2007 - 12/9999
    Then since the initializations cover all of the previous periods and the future periods, would the delta loads function correctly? Can anyone confirm this method or give some more guidance on how you have initialized GL?
    Thanks!
    Justin

    Hi,
    I think you would be fine, but as such I have never done an init for GL.
    For FI you should go with an ODS as the first layer.
    Check this out ->
    http://help.sap.com/saphelp_bw33/helpdata/en/af/16533bbb15b762e10000000a114084/content.htm
    Table BWOM2_TIMEST serves to document the loading history of Financial Accounting line items. It also provides defined restart points following incorrect data requests.
    Hope it helps
    Gaurav

  • How to pop-up a text box for a grid item on a user defined table?

    Hello,
    I have a user-defined table with a grid, and one of the columns is for comments. What I'm trying to do is pop up a text box when double-clicking in the column field. This would be the same as the Item Details column in the Sales Order items grid.
    Thanks,
    Ron

    Please post your question on the SDK forum. Only the SDK can meet this need.
    Thanks,
    Gordon

  • Print Layout to User Defined Table

    Hello guys,
    I want to make a preprinted layout for my user query on a user-defined table, where the header information comes from another query.
    I guess the base template User Report (system) can only show the repetitive data from one query, but not the header area.
    Could you help me show the header information from another query?
    Thanks Regards

    Ria,
    The word is that you cannot use PLD with user-defined tables (I wish it were possible). So in any case, to do that you have to use third-party tools such as Crystal Reports or Reporting Services.
