Suppressing table continuation variable and other table questions

Hello all,
Is there any way to suppress the table continuation variable from being converted? Is the only way to conditionalize all of its occurrences in FrameMaker?
Also, for some reason, the FrameMaker-to-RoboHelp paragraph mapping for my table caption paragraphs isn't being applied properly. Do table captions need to be treated differently from other paragraph tags?
Thanks.
Sue

James, I'm also curious about using a period after the table title, as I can't remember noticing periods after titles in any publication -- is it required by a specific style guide? My Chicago Manual of Style (12th; I can't put my hands on the 13th or 14th at the moment) does not use periods after titles, and in fact it even eschews repeating the title (ref. 12.17).
In FM 7.2 the table continuation variable is a bit schizophrenic: while the variable content is suppressed (as expected) on the first page of the table, the variable symbol's on-screen horizontal space is not. If the variable's space were suppressed as well, it would be possible to do
Table 1. XYZ Operation//var//.
and thus get the result you would like. Very annoying...

Similar Messages

  • Table continuation variable: philosophical question

    My apologies in advance if this is too off-topic.
    The Table Continuation variable (and its companion, the Table Sheet variable) are useful tools. But how necessary are they?
    I ask because using the variables involves a certain amount of overhead (particularly the Table Sheet variable, when used in a table footer), and I'm not convinced the effort is worth it. These are just software user guides we're doing here, not aviation specs or military schematics. I can't imagine that our users won't be smart enough to flip the page to see if there's more information.
    Within my current group there are opposing forces on the necessity of using these variables. Does anyone here know of any studies into this? Or do any of you have any particular insight that will drive this standoff one way or another?
    Thanks!

    Ed:
    I find that the optimal place for a table continuation variable is in the table's caption, if the table's design includes a caption.
    Alternatively, I place it in one of the columns in a table's heading. In this case, I place it in one of the table's wider columns at the extreme left or right side of the table; for example, a "Description" column. A wide column ensures that FrameMaker will properly display the variable...
    I generally choose to use table continuation variables because I assume that a reader of the technical documentation I create will quite probably "dive bomb" into the middle of a doc looking for something; the table continuation variable helps them recognize that the table they've dived into begins on a preceding page...
    Cheers & hope this helps,
    Riley

  • Would like Table Continuation Variable added to a paragraph style, such as to a table title

    My group uses both Frame 7.2 and Frame 9, and we would like to include the Table Continuation variable in our table title paragraph tag definition so that we don't have to add the variable manually. Is this possible now? If so, how?
    Thanks
    Sherri McCue
    Synopsys, Inc.

    Hi Sherri,
    I believe it is possible. I am attaching a screenshot you can follow.
    This is for FrameMaker 9; I am not sure about FrameMaker 7.2.
    I hope this helps you.
    Thanks & Regards
    Priyank

  • Export - Import In ABAP ( for variables and internal table)

    How can we pass the value of a variable and an internal table using EXPORT and IMPORT?
    data: var type sy-uzeit.
    var = sy-uzeit.
    EXPORT var TO MEMORY ID 'TIME'.
    data: var type sy-uzeit.
    IMPORT var FROM MEMORY ID 'TIME'.
    write:/ var, sy-subrc, sy-uzeit.
    I found that var has the value 0 after importing.
    What is the right syntax for passing the value of a variable and an internal table?
    regards,
    dushyant.

    Hi,
    There are two possible solutions.
    Solution 1: two programs. Program 1 should be run at least once beforehand so that memory ID 'TIME' is filled.
    data: var type sy-uzeit.
    var = sy-uzeit.
    EXPORT var TO MEMORY ID 'TIME'.
    Program 2 will only produce the result if 'TIME' has already been filled.
    data: var type sy-uzeit.
    clear var.
    IMPORT var FROM MEMORY ID 'TIME'.
    write:/ var, sy-subrc, sy-uzeit.
    Solution 2: a single program.
    data: var type sy-uzeit.
    var = sy-uzeit.
    EXPORT var TO MEMORY ID 'TIME'.
    clear var.
    IMPORT var FROM MEMORY ID 'TIME'.
    write:/ var, sy-subrc, sy-uzeit.
    Kindly reward points by clicking the star on the left of the reply if it helps.

  • Interface tables, Seeded program and error Table

    Hi, can anyone let me know the interface tables, seeded programs, and error tables for
    1) On hand_quantity Interface
    2) Sub inventory transfer conversion
    Version: 11.5.10
    DB: 10g
    Thanks & Regards,
    pallis

    Hi Pallis,
    Please refer to the following link for the sub-inventory transfer:
    Inventory API interface transaction loading
    http://khalidmehmoodapps.blogspot.in/2010/08/inter-org-inventory-transfer.html
    HTH
    sanjay

  • Binding for table produces list for other tables using foreign key and crea

    Using
    software Jdev 11G, WLS 11G, Oracle DB 11G, Windows Vista platform
    technology EJB 3.0, jspx, backing beans, session bean
    I cannot create a named query on my secondary table. The accessor for the column uses the entity object rather than the name and value of the column.
    For instance:
    A Coketruck table has inventory records in a Products table.
    Coketruck has a one-to-many relationship to the Products table.
    Products has a many-to-one relationship to Coketruck.
    I need to return the products from the Products table based on the Coketruck, but I cannot create a named query because the field in the Products entity is an entity object type instead of a Long that I can use to look up all the products by the TRUCK_ID column.
    This is what I was expecting…
    private Long truckId;
    public Long getTruckId() {
        return truckId;
    }
    public void setTruckId(Long truckId) {
        this.truckId = truckId;
    }
    Instead, this is what I have…
    @ManyToOne
    @JoinColumn(name = "TRUCK_ID")
    private Coketruck coketruck;
    public Coketruck getCoketruck() {
        return coketruck;
    }
    public void setCoketruck(Coketruck coketruck) {
        this.coketruck = coketruck;
    }
    How do I do a query on the Products table to return all the products that are in the coketruck?
    If I do the following, it expects me to pass the entity object, which I cannot use as search criteria for my find method.
    @NamedQuery(name = "Products.findById", query = "select o from Products o where o.truckId = :truckId")
    On a different note but the same song…
    I noticed, when I look at my session bean data controls, that the coketruck already has a list of the products. I have created a JSP page with a backing bean and have been able to use the named query on the Coketruck entity to retrieve the productList. Unfortunately, I need to sort the products by type, and I was not able to find where to do the work of iterating through the productList to get my desired display. Therefore I started looking at another named query that would retrieve only the product_type, ordered by truckId.
    Seems I have come full circle… I don’t care what method I have to use to get the info back.
    Any help is greatly appreciated!

    user9005175 wrote:
    Hi!
    I work on an application which uses a shopping cart stored in a database. The shopping cart uses two tables:
    CART: Holds information common for one shopping cart: the user it is connected to etc.
    - Primary key: CART_ID
    CART_ROW: One row in the cart, e.g. one new product to buy.
    - Primary key: ROW_ID
    - Foreign key: CART_ROW.CART_ID references CART.CART_ID
    From the code the rows in the cart are collected per cart, as is modelled by the foreign key. There exists one more relationship, which we use in the code, but which is not modelled by a foreign key in the database. One row can be dependent on another row, which makes the other row a parent.
    CART_ROW has a column PARENT_ID which references CART_ROW.ROW_ID.
    Should we add a foreign key for PARENT_ID? Or are there any questions to consider when it is a foreign key to the same table?
    I suggest adding the foreign key; it won't harm performance (except on insert, when there is validation of the foreign key). And it would prevent users from inserting wrong/corrupt data, either through code or directly by logging in to the database.
    A while ago we added indexes, both on ROW_ID and on PARENT_ID. Could the index on PARENT_ID have been harmful, since there is no foreign key?
    An index on PARENT_ID would only be harmful if you do not make use of it after creating it (i.e., there is no query that uses this index).
    And if you decide to have a foreign key on PARENT_ID, then I suggest having an index on PARENT_ID too, as it will help at least when you delete records from this table.
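    For illustration only, a rough sketch of what that could look like in SQL, using the table and column names from the post (the constraint and index names are made up; adjust the delete behaviour to your needs):
    -- Self-referencing foreign key: PARENT_ID must point at an existing ROW_ID in the same table
    ALTER TABLE cart_row
      ADD CONSTRAINT fk_cart_row_parent
      FOREIGN KEY (parent_id) REFERENCES cart_row (row_id);
    -- Supporting index so lookups and parent-side checks on deletes don't scan the table
    CREATE INDEX idx_cart_row_parent_id ON cart_row (parent_id);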
    Best regards!

  • XML to Flat File DB Table So I and Others Can Use SSRS for Reporting

    I'm using a program called Orbeon Forms to quickly create and distribute forms on the local network.
    The program recently got updated to version 4.6 so that it can use a SQL Server database as a backend. (I was using MySQL in a lower version.)
    The data created when you submit a form is held in XML format in one of the columns of the database (see below).
    I am trying to pull that data out of the XML column (for that particular form) and place it into a separate table, using the XML tags as columns.
    I have no idea where to start or how to do this. Please keep in mind I'm a novice when it comes to SQL Server (but I do know how to do select statements :-)....
    Is this very difficult to do? If it is (for me, at least), where can I go to get someone to do it for me, along with the documentation for it? I was also thinking of creating a trigger on the Orbeon table (this is how it is currently set up in the MySQL database) to automagically insert any saved form data into the new form data table. Is this also difficult to do? And can I have a trigger for each form I create on the table?
    If anyone can help, I'd appreciate it. Thanks.
    I would eventually like to add the form data from the old MySQL database table into this new Orbeon table, but it appears the upgrade also added two extra columns. I'm figuring that once the data is in the Orbeon table, I will have to run whatever statements are needed to place it into the separate table so I can use the data.
    Randy Marrero

    Erland, thanks for the reply. Even after reviewing the link you sent I'm still having problems. Guess that is why I'm not a programmer. Anyway, I've created some test form XML, and it is copied and pasted below.
    Again, I'm only really interested in seeing whether I can get the information out of the XML and put it in its own flat table. I keep reading that a trigger is the best way to do this (when a form gets submitted into the database in XML format, an ON INSERT trigger would take the submitted data and insert just the data into another table, in separate columns), but I'm not sure how to do that either. I have a copy of what the MySQL trigger would look like, but I keep running into errors when I try to run it in SQL Server. There is also the issue of how I would create another trigger if/when I create another form; I read somewhere that I can only create one insert trigger per table, but I don't know whether that is true. Anyway, below is the XML as it sits in the XML column in the database.
    XML STARTS HERE------------------
    <form xmlns:oxf="http://www.orbeon.com/oxf/processors"
          xmlns:xxi="http://orbeon.org/oxf/xml/xinclude"
          xmlns:xh="http://www.w3.org/1999/xhtml"
          xmlns:xi="http://www.w3.org/2001/XInclude"
          xmlns:ev="http://www.w3.org/2001/xml-events"
          xmlns:saxon="http://saxon.sf.net/"
          xmlns:xs="http://www.w3.org/2001/XMLSchema"
          xmlns:odt="http://orbeon.org/oxf/xml/datatypes"
          xmlns:fb="http://orbeon.org/oxf/xml/form-builder"
          xmlns:xxf="http://orbeon.org/oxf/xml/xforms"
          xmlns:xbl="http://www.w3.org/ns/xbl"
          xmlns:xpl="java:org.orbeon.oxf.pipeline.api.FunctionLibrary"
          xmlns:sql="http://orbeon.org/oxf/xml/sql"
          xmlns:p="http://www.orbeon.com/oxf/pipeline"
          xmlns:fr="http://orbeon.org/oxf/xml/form-runner"
          xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
          xmlns:xf="http://www.w3.org/2002/xforms"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:exf="http://www.exforms.org/exf/1-0">
      <Header>
        <HHIFR_Patch>/fr/service/persistence/crud/bfp/permits/form/738433584d6c0aff25e0ab8ca1bd99bcc7e10b49.bin</HHIFR_Patch>
      </Header>
      <Permit_Number_Hdr>
        <Permit_Number>Test123</Permit_Number>
      </Permit_Number_Hdr>
      <Permit_Info_Hdr>
        <Permit_Date>2014-07-15-05:00</Permit_Date>
        <Permit_Station>HQ</Permit_Station>
        <Permit_Employee_Number>0676</Permit_Employee_Number>
        <Permit_Status>Active</Permit_Status>
        <Employee_Name>Randy Marrero</Employee_Name>
      </Permit_Info_Hdr>
      <Applicant_Info_Hdr>
        <Applicant_Name>Randy Marrero</Applicant_Name>
        <Applicant_Phone>8434741099</Applicant_Phone>
        <Applicant_Email>[email protected]</Applicant_Email>
      </Applicant_Info_Hdr>
      <Mailing-Address-Section>
        <Mailing_Address>40 Summit Dr</Mailing_Address>
        <Mailing_Suite/>
        <Mailing_City>Hilton Head Island</Mailing_City>
        <Mailing_State>SC</Mailing_State>
        <Mailing_Zip>29926</Mailing_Zip>
      </Mailing-Address-Section>
      <Physical_Burn_Hdr>
        <Physical_Address>35 Summit Dr</Physical_Address>
        <Physical_Suite/>
        <Physical_City>Hilton Head Island</Physical_City>
        <Physical_State>South Carolina</Physical_State>
        <Physical_Zip>29926</Physical_Zip>
      </Physical_Burn_Hdr>
    </form>
    XML ENDS HERE ------------
    Randy Marrero
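    For reference, shredding XML like the sample above into a flat table is typically done with the xml data type's nodes()/value() methods, and the same SELECT can live inside an AFTER INSERT trigger (SQL Server allows more than one AFTER trigger per table). The sketch below is illustrative only: the source table and column names (orbeon_form_data and its xml column) and the target table permit_flat are placeholders rather than the actual Orbeon schema, so check your database first.
    -- One-off shred of already-saved forms into a flat table (placeholder names throughout).
    -- Assumes the stored document's root element is <form>, as in the sample above, and that
    -- the column really is of the xml data type (otherwise CAST it to xml first).
    SELECT
        f.x.value('(Permit_Number_Hdr/Permit_Number)[1]',    'varchar(50)')  AS Permit_Number,
        f.x.value('(Permit_Info_Hdr/Permit_Date)[1]',        'varchar(30)')  AS Permit_Date,
        f.x.value('(Permit_Info_Hdr/Permit_Station)[1]',     'varchar(20)')  AS Permit_Station,
        f.x.value('(Permit_Info_Hdr/Employee_Name)[1]',      'varchar(100)') AS Employee_Name,
        f.x.value('(Applicant_Info_Hdr/Applicant_Name)[1]',  'varchar(100)') AS Applicant_Name,
        f.x.value('(Applicant_Info_Hdr/Applicant_Phone)[1]', 'varchar(20)')  AS Applicant_Phone
    FROM dbo.orbeon_form_data AS d
    CROSS APPLY d.[xml].nodes('/form') AS f(x);

    -- Trigger variant for newly submitted forms: copy the new rows' XML into a table
    -- variable first, then shred it into the flat table.
    CREATE TRIGGER trg_orbeon_permit_flat
    ON dbo.orbeon_form_data
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @new TABLE (doc xml);
        INSERT INTO @new (doc) SELECT i.[xml] FROM inserted AS i;

        INSERT INTO dbo.permit_flat (Permit_Number, Permit_Date, Permit_Station, Employee_Name)
        SELECT
            f.x.value('(Permit_Number_Hdr/Permit_Number)[1]', 'varchar(50)'),
            f.x.value('(Permit_Info_Hdr/Permit_Date)[1]',     'varchar(30)'),
            f.x.value('(Permit_Info_Hdr/Permit_Station)[1]',  'varchar(20)'),
            f.x.value('(Permit_Info_Hdr/Employee_Name)[1]',   'varchar(100)')
        FROM @new AS n
        CROSS APPLY n.doc.nodes('/form') AS f(x);
    END;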

  • Interactively changing values to table problem: indicator and control table, and why it does not work after a while...?

    I have been producing a VI that loads a set of data and displays it in a table: A table control is initialised with an empty variable, the loaded data takes the place of the variable and fills the table, while some headers are added. That table feeds an intensity graph to give a pictorial impression of the data.
    With this scheme, a user can change any value of the table, and the changes are interactively reflected on the graph.
    Problem: after saving the VI a few times, access to the table no longer works. It is the same with all the numeric controls in the VI where it should be possible to enter a value.
    If anybody has an idea about the potential causes of this problem, I would be really grateful, as the VI is very useful... when it works!
    Regards,
    Elie Allouis

    I cannot imagine what is causing the error. Would you be willing to post some code so we can see if we can reproduce the problem?
    Jeremy7

  • Find missing values in third table based on two other tables

    Hi- I have tried to do this on my own but the answer eludes me. Can someone please point me in a better direction? We have a table project_categories with project_IDs and related project category_IDs. We have another table contact_categories with contact_IDs and related contact category_IDs. We use these two tables to link contacts to projects by at least one category_ID. It works well for email notifications. We have a third table for "planholders" who are interested in particular projects. The planholder table has contact_IDs and related project_IDs. I want to find contacts who have expressed interest in a project, by inserting a row in the planholder table, but they have not selected any of the project categories in contact categories. This would be an unusually rare exception, so to test it I added myself as a contact 100010 and as a project "planholder", but deliberately did not select any of the related project categories in contact categories.
    select ph.contact_id, ph.project_id, pc.category_id, cc.category_id
      from purchasing_planholder ph, purchasing_project_categories pc, purchasing_contact_categories cc
     where ph.project_id = pc.project_id
       and ph.contact_id = cc.contact_id
       and ph.contact_id || ph.project_id || cc.category_id
           not in (select ph.contact_id || pc.project_id || pc.category_id
                     from purchasing_planholder ph, purchasing_project_categories pc, purchasing_contact_categories cc
                    where ph.project_id = pc.project_id
                      and ph.contact_id = cc.contact_id)
       and ph.contact_id = 100010
    This gives me every category for the test contact and every category for the project. I want to find contacts in the planholder table who have not selected any of the categories related to the specific project they selected in the planholder table. I tried MINUS, outer joins, etc. Does this require PL/SQL and a temp table, or can I find the exceptions with a report? Or do I need a different approach? The ideal answer would display one row with the contact_ID and the project_ID, where the contact has not selected any of the project categories. This would be used to alert them that they need to select at least one of the project categories on the contact form. Thanks for any feedback.
    Peter
    CONTACT_ID PROJECT_ID CATEGORY_ID CATEGORY_ID
    100010 701 1 19
    100010 701 1 18
    100010 701 16 19
    100010 701 16 18
    100010 701 21 19
    100010 701 21 18
    100010 701 24 19
    100010 701 24 18
    8 rows selected

    Hi User,
    When you post a question in the forum, please be clear about the inputs, the expected output, what you have tried, and the errors/results you got.
    Also, see this link from BluShadow:
    SQL and PL/SQL FAQ
    Please enclose queries or programs in code tags so the code is formatted.
    Just try something like this:
    SELECT *
      FROM emp c
     WHERE NOT EXISTS (SELECT *
                         FROM emp a
                        WHERE EXISTS (SELECT *
                                        FROM emp b
                                       WHERE a.empno = b.empno));
    In the inner query, the condition checked is whether empno is the same in both tables; the rows that satisfy it are excluded, and the outer query returns everything else. In the above case, no rows are returned.
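    Applied to the tables in the original question, a NOT EXISTS version might look roughly like this (untested sketch, reusing the table and column names from the post):
    -- Planholder rows whose contact has selected none of that project's categories
    SELECT ph.contact_id, ph.project_id
      FROM purchasing_planholder ph
     WHERE NOT EXISTS (SELECT 1
                         FROM purchasing_project_categories pc
                         JOIN purchasing_contact_categories cc
                           ON cc.category_id = pc.category_id
                          AND cc.contact_id  = ph.contact_id
                        WHERE pc.project_id = ph.project_id);
    This returns one row per contact/project pair where the planholder has not picked any of the project's categories, which matches the ideal output described above.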
    Thanks,
    Shankar

  • Insert data Flatfile table to Parent and child tables

    Hi All,
    I have a flat-file staging table that we load from daily .txt files, and I want to populate the flat-file data into a parent table as well as child tables. The parent table has a primary key ID, and this ID is the foreign key of the child tables.
    Our process is as follows:
    1. Duplicates have to be removed from the flat file, based on the daily date.
    2. Before inserting into the parent table, we have to check for duplicates (we have a unique key of 4 columns in the parent). If a duplicate exists, we delete it and then insert the unique records into the parent table (the primary key is ID). The reason we delete the duplicates is that we receive daily files, so if a record is updated in the future, that record should be inserted into the parent and child tables; therefore we delete the old records and insert the new ones.
    3. After inserting into the parent, we have to populate the child tables from the flat-file table, inserting the parent table's primary key as the foreign key of the child tables.
    4. If any truncation error occurs, those errors should go to an error log table.
    Right now we are using a cursor for this, and the cursor has performance issues, so we are looking for a faster set-based way to populate the parent and child tables.
    Please advise on how to do this process with better performance.
    Can anyone reply, please?
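    For illustration, a set-based pattern that often replaces this kind of cursor is: de-duplicate the staging rows, delete the parents (and children) that are being re-sent, insert the parents while capturing the generated IDs with OUTPUT, then join back to insert the children. The sketch below is only a rough outline; every table and column name in it (stg_flatfile, parent_tbl, child_tbl, k1..k4, etc.) is a placeholder, since the actual schema isn't shown:
    BEGIN TRY
        BEGIN TRAN;

        -- 1. De-duplicate the day's staging rows on the 4-column business key.
        ;WITH d AS (
            SELECT *, ROW_NUMBER() OVER (PARTITION BY k1, k2, k3, k4
                                         ORDER BY load_date DESC) AS rn
              FROM stg_flatfile
             WHERE load_date = CAST(GETDATE() AS date)
        )
        SELECT * INTO #stg FROM d WHERE rn = 1;

        -- 2. Delete parents (children first) that are being re-sent in today's file.
        DELETE c
          FROM child_tbl c
          JOIN parent_tbl p ON p.id = c.parent_id
          JOIN #stg s ON s.k1 = p.k1 AND s.k2 = p.k2 AND s.k3 = p.k3 AND s.k4 = p.k4;

        DELETE p
          FROM parent_tbl p
          JOIN #stg s ON s.k1 = p.k1 AND s.k2 = p.k2 AND s.k3 = p.k3 AND s.k4 = p.k4;

        -- 3. Insert parents and capture the generated IDs together with the business key.
        DECLARE @map TABLE (id int, k1 varchar(50), k2 varchar(50), k3 varchar(50), k4 varchar(50));
        INSERT INTO parent_tbl (k1, k2, k3, k4, other_col)
        OUTPUT inserted.id, inserted.k1, inserted.k2, inserted.k3, inserted.k4
          INTO @map (id, k1, k2, k3, k4)
        SELECT k1, k2, k3, k4, other_col FROM #stg;

        -- 4. Insert children, using the captured parent IDs as the foreign key.
        INSERT INTO child_tbl (parent_id, child_col)
        SELECT m.id, s.child_col
          FROM #stg s
          JOIN @map m ON m.k1 = s.k1 AND m.k2 = s.k2 AND m.k3 = s.k3 AND m.k4 = s.k4;

        COMMIT;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        -- 5. Send truncation or other errors to an error log table instead of failing silently.
        INSERT INTO etl_error_log (error_number, error_message, logged_at)
        VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), SYSDATETIME());
    END CATCH;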

    Hi RajVasu,
    Since this issue is related to Transact-SQL, I will move this thread to the Transact-SQL forum. Some delay might be expected from the transfer. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Katherine Xiong
    TechNet Community Support

  • Mismatch between Master data table /BIC/P and SID table /BIC/S

    Hello All,
    In our master data table we have LOGSYS in the key.
    After a system copy we had a problem with the logical system name: when we copied our BW production system into the test system, we found the production logsys as a value. BDLS didn't change all values for all structures, so some master data tables still have the production logsys instead of the test logsys, while other records have the test logsys values.
    For example:
    Our production logsys is BIP300 and our test logsys is BIT300.
    In our master data table in test we found some records with the logsys value BIP300 and others with BIT300.
    All our ODS objects and cubes have the right logsys value in test ("BIT300").
    We developed an ABAP report to run on selected tables to change the logsys from BIP300 to BIT300.
    We executed the report only on the P master data table, e.g. /BIC/PZCOOR, and it works fine: it changed all the record values
    from BIP300 to BIT300 and eliminated all duplicate records.
    The problem is that when we try to view the master data, BW returns data from the SID table /BIC/SZCOOR.
    In fact, if we look at the contents of the /BIC/SZCOOR table, there are still records with logsys BIP300 as well as BIT300,
    and likewise, if we execute our query on ZCOOR we get values with both BIP300 and BIT300.
    How should we fix this problem?
    If we run our ABAP report on /BIC/SZCOOR to change the logsys name, will that fix it, or could it cause problems?
    Please give me some useful help.
    Thanks a lot
    Bilal

    Hi,
    Follow this deletion sequence:
    1. Delete data from cube YMM_*.
    2. Delete data from 0PUR_C*.
    3. Delete data from 0MATERIAL.
    4. Delete data from info_rec.
    The best option would be to do a selective deletion for these four materials and then run a repair request to update the data target.
    Hope that helps.
    Regards
    Mr Kapadia

  • To populate dynamically created int table with data from other table

    Hi everybody,
    I have already created an internal table dynamically, but now I want to populate it with data from another internal table, depending on the plant name.
    My dynamic internal table contains fields named after plants, like '8001', '8002', and so on.
    Now I want to read data from the other table, and depending on BWKEY (which contains values similar to the plant names), append to this new dynamic internal table via a READ ... WITH KEY statement.
    I cannot hard-code the field name, as a field-symbol reference cannot be hard-coded.
    Please help.

    Hi,
    Check the code below:
    REPORT  ztestdyn.
    TYPE-POOLS : slis.
    TABLES: yyle0003.
    DATA:
      g_exit    TYPE c,
      g_save    VALUE 'A',               "For parameter I_SAVE
      g_repid   LIKE sy-repid,           "For program name
      g_variant TYPE disvariant.         "For parameter IS_VARIANT
    *Tables
      DATA: d_ref TYPE REF TO data,
            d_ref1 TYPE REF TO data,
            i_alv_cat1 TYPE TABLE OF lvc_s_fcat,
            ls_alv_cat1 LIKE LINE OF i_alv_cat1.
      DATA: BEGIN OF total_tab OCCURS 0 ,
            tknum TYPE yyle0003-tknum,
            quantity TYPE p,  "yyle0003-QUANTITY,
            END OF total_tab.
      DATA: BEGIN OF g_scandata_tab OCCURS 0.
              INCLUDE STRUCTURE yyle0003.
      DATA: END OF g_scandata_tab.
      DATA: g_yyle0003_tab LIKE yyle0003 OCCURS 0 WITH HEADER LINE.
      DATA: g_itab1 TYPE TABLE OF yyle0003.
      DATA: wa_itab1 LIKE g_scandata_tab.
      TYPES: BEGIN OF itab2,
             tknum TYPE yyle0003-tknum,
             vhilm TYPE yyle0003-vhilm,
             quantity TYPE p,
             END OF itab2.
      DATA: g_itab3 TYPE TABLE OF itab2.
      DATA: wa_itab3 TYPE itab2.
      DATA: g_itab5 TYPE TABLE OF itab2.
      DATA: wa_itab5 TYPE itab2.
      DATA: g_itab4 TYPE TABLE OF itab2.
      DATA: wa_itab4 TYPE itab2.
      DATA: gv_wa TYPE REF TO data.
      DATA : wa_tab TYPE itab2.
      DATA: BEGIN OF itab6 OCCURS 0,
             vhilm TYPE yyle0003-vhilm,
             quantity TYPE p,
             END OF itab6.
    ******************Start of Internal Table Definition *******************
      DATA:
            g_custom_container_0100 TYPE REF TO cl_gui_custom_container,
            g_alv_grid_0100    TYPE REF TO cl_gui_alv_grid,
            g_container_0100   TYPE scrfname VALUE 'LIST',
            g_mylayout         TYPE lvc_s_layo,
            ok_code            LIKE sy-ucomm.
      FIELD-SYMBOLS :<f_fs> TYPE table,
                     <f_fs11> TYPE table,
                     <f_fs1> TYPE table,
                     <f_fs3> TYPE ANY,
                     <f_fs4> TYPE ANY,
                     <f_field> TYPE ANY,
                     <f_fs5> TYPE ANY.
      FIELD-SYMBOLS: <fs_wa> TYPE ANY.
      DATA: l_var TYPE i,
            l_i   TYPE i.
      DATA: l_var1 TYPE char20,
            l_var2 TYPE char20.
    DATA: l_TOTAL TYPE I,
          L_FILL TYPE i,
          L_TOT  TYPE I.
    DATA: l_int TYPE i,
           l_sum TYPE i.
    FIELD-SYMBOLS: <f_fs2> TYPE  itab2, "
                     <f_fs6> TYPE ANY,
                     <f_fs7> TYPE ANY.
      DATA: l_var3 TYPE char15.
      DATA: l_quant TYPE p.
    FIELD-SYMBOLS: <f_fs8> LIKE itab6, "
                     <f_fs9> TYPE ANY,
                     <f_fs10> TYPE ANY.
    FIELD-SYMBOLS : <f_fs12> TYPE ANY,
                      <f_fs13> TYPE ANY.
      SORT g_scandata_tab BY tknum vhilm.
      LOOP AT g_scandata_tab INTO wa_itab1.
        MOVE-CORRESPONDING wa_itab1 TO wa_itab3.
        APPEND wa_itab3 TO g_itab3.
      ENDLOOP.
      LOOP AT g_itab3 INTO wa_itab3.
        COLLECT wa_itab3 INTO g_itab4.
      ENDLOOP.
      LOOP AT g_itab4 INTO wa_itab4.
        MOVE-CORRESPONDING wa_itab4 TO wa_itab5.
        MOVE-CORRESPONDING wa_itab4 TO itab6.
        APPEND wa_itab5 TO g_itab5.
        COLLECT itab6.
      ENDLOOP.
      CLEAR wa_itab3.
      SORT g_itab4 BY tknum vhilm.
      DELETE ADJACENT DUPLICATES FROM g_itab4 COMPARING vhilm.
      DESCRIBE TABLE g_itab4 LINES l_var.
      l_i = '2'.
      ls_alv_cat1-fieldname = 'TKNUM'.
      ls_alv_cat1-col_pos = 1.
      ls_alv_cat1-coltext ='ShipmentNo.'.
      APPEND ls_alv_cat1 TO i_alv_cat1.
      DATA: l_var4(10) TYPE c,
            l_var5(10) TYPE c,
            l_fieldname(20) TYPE c..
      LOOP AT g_itab4 INTO wa_itab4.
        IF l_var >= 1.
          CONDENSE wa_itab4-vhilm NO-GAPS.
          ls_alv_cat1-fieldname = wa_itab4-vhilm. "l_fieldname.
          ls_alv_cat1-col_pos = l_i.
          ls_alv_cat1-coltext = wa_itab4-vhilm.
          ls_alv_cat1-do_sum  ='X'.
          APPEND ls_alv_cat1 TO i_alv_cat1.
          CLEAR : ls_alv_cat1, l_fieldname.
          l_i = l_i + 1.
        ENDIF.
        AT LAST.
          ls_alv_cat1-fieldname = 'TOTAL'. "l_fieldname.
          ls_alv_cat1-col_pos = l_i.
          ls_alv_cat1-coltext = 'TOTAL'.
          ls_alv_cat1-do_sum  ='X'.
          APPEND ls_alv_cat1 TO i_alv_cat1.
          CLEAR : ls_alv_cat1, l_fieldname.
        ENDAT.
        SORT i_alv_cat1 BY fieldname.
        DELETE ADJACENT DUPLICATES FROM i_alv_cat1.
      ENDLOOP.
      SORT i_alv_cat1 BY col_pos.
      CALL METHOD cl_alv_table_create=>create_dynamic_table
        EXPORTING
          it_fieldcatalog = i_alv_cat1
        IMPORTING
          ep_table = d_ref.
      ASSIGN d_ref->* TO <f_fs>.
      CREATE DATA gv_wa LIKE LINE OF <f_fs>.
      ASSIGN gv_wa->* TO <fs_wa>.
      DELETE ADJACENT DUPLICATES FROM <f_fs> COMPARING ALL FIELDS.
        LOOP AT itab6.
        CLEAR wa_itab5.
        wa_itab5-tknum = 'Total'.
        MOVE-CORRESPONDING itab6 TO wa_itab5.
        APPEND wa_itab5 TO g_itab5.
        CLEAR wa_itab5.
      ENDLOOP.
         DESCRIBE TABLE g_itab5 LINES L_TOT.
           LOOP AT TOTAL_TAB.
          L_TOTAL = L_TOTAL + total_tab-quantity.
         ENDLOOP.
      LOOP AT g_final ASSIGNING <f_fs2>.
        ASSIGN COMPONENT 'TKNUM' OF STRUCTURE <f_fs2> TO <f_fs6>.
        ASSIGN COMPONENT 'TKNUM' OF STRUCTURE <fs_wa> TO <f_fs7>.
        <f_fs7> = <f_fs6>.
        CONDENSE <f_fs2>-vhilm NO-GAPS.
        ASSIGN COMPONENT 'VHILM' OF STRUCTURE <f_fs2> TO <f_fs3>.
        ASSIGN COMPONENT 3 OF STRUCTURE <f_fs2> TO <f_fs4>.
        MOVE <f_fs3> TO l_var1.
        ASSIGN COMPONENT l_var1 OF STRUCTURE <fs_wa> TO <f_fs5>.
        <f_fs5> =  <f_fs4>.
        CLEAR total_tab-quantity.
        READ TABLE total_tab WITH KEY tknum = <f_fs6>.
        IF sy-subrc = 0.
          ASSIGN total_tab-quantity TO <f_fs12>.
          ASSIGN COMPONENT 'TOTAL' OF STRUCTURE <fs_wa> TO <f_fs13>.
          <f_fs13> = <f_fs12>.
        ENDIF.
        L_FILL = L_FILL + 1.
        IF L_FILL = L_TOT.
         ASSIGN L_TOTAL TO <f_fs12>.
          ASSIGN COMPONENT 'TOTAL' OF STRUCTURE <fs_wa> TO <f_fs13>.
          <f_fs13> = <f_fs12>.
        ENDIF.
        AT END OF <f_fs2>-tknum.
          APPEND <fs_wa> TO <f_fs>.
          CLEAR  <fs_wa>.
        ENDAT.
      ENDLOOP.
      CLEAR: <f_fs6>,
              <f_fs7>.
      CLEAR <fs_wa>.
    CALL SCREEN 0100.
    *&      Module  STATUS_0100  OUTPUT
    *       text
    MODULE status_0100 OUTPUT.
      SET PF-STATUS 'ZVKS'.
      SET TITLEBAR 'ZVKS'.
      CHECK sy-ucomm IS INITIAL.
      SORT g_scandata_tab BY tknum vhilm.
      CREATE OBJECT g_custom_container_0100
             EXPORTING container_name = g_container_0100
             EXCEPTIONS
               cntl_error = 1
               cntl_system_error = 2
               create_error = 3
               lifetime_error = 4
               lifetime_dynpro_dynpro_link = 5.
      CREATE OBJECT g_alv_grid_0100
             EXPORTING i_parent = g_custom_container_0100.
      g_mylayout-grid_title = 'Display Scanning data'.
      CALL METHOD g_alv_grid_0100->set_table_for_first_display
        CHANGING
          it_outtab                     = <f_fs>
          it_fieldcatalog               = i_alv_cat1
                    EXCEPTIONS
                      invalid_parameter_combination = 1
                      program_error                 = 2
                      too_many_lines                = 3
                      OTHERS                        = 4.
      IF sy-subrc <> 0.
                  MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                             WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    ENDMODULE.                 " STATUS_0100  OUTPUT
    Regards
    Kannaiah

  • Rptcurrency in WB table, FAC2 table and FACT table

    I loaded data using a transformation file with the following logic for RPTCURRENCY:
    RPTCURRENCY=*newcol(LC)
    The data loaded successfully. I then checked the WB table, FAC2 table and FACT table respectively, and noticed the following:
    In the WB table:     rptcurrency ... signeddata
                         LC              5000
    In the FAC2 table:   rptcurrency ... signeddata
                         USD             5000
                         EURO            5000
    In the FACT table:   rptcurrency ... signeddata
                         USD             5000
                         EURO            5000
                         LC              5000
    I have USD, EURO and LC in RPTCURRENCY, with the reporting property of USD and EURO set to 'Y'. It looks like when the system loads the LC data it copies the LC data to USD and EURO, but it doesn't make sense to me to have the same value under different currencies.
    Can anybody explain this?

    JW,
    Please check your DEFAULT.LGF formula in the Admin console.
    DEFAULT.LGF is the default logic, which means it is called whenever a user posts data to the server.
    If DEFAULT.LGF includes a basic currency translation statement, it will convert the value behind the scenes.
    Of course, the user can execute it again through the Data Manager later.
    Thanks
    James Lim.

  • How does Smartware handle backing up two drives to My Cloud...and other Smartware questions

    When you set up backups from two disk drives on one PC to My Cloud via Smartware, do they both back up simultaneously, or do I need to do one, then the other? I have a Windows 8.1 laptop with two hard drives, one 500 GB, the other 1 TB. After installing My Cloud (3 TB) and Smartware, I went into Smartware and set up my backups. I selected various folders on each drive to back up, and I also set up two different folders on My Cloud to receive these files. The setup is like this:
    PC drive -> My Cloud folder
    Several folders on the G: drive -> Pictures and Video folders on My Cloud
    Desktop and User folder on the main drive -> Users folder on My Cloud
    Every time I open my PC at home, I start Smartware, click the Backup tab and let the backup continue. I feel like I need to manually go into the Home and Backup tabs for each backup plan (one per drive) and restart it. Regardless, my question is this: is Smartware able to handle these two drives on my laptop backing up onto My Cloud?
    Other Smartware questions:
    - Does my PC have to be on my home network (where My Cloud is installed) in order to run Smartware? Can I be on the internet outside of my home and run Smartware?
    - I run Outlook 2010. Will Smartware back up the latest Outlook file automatically as new files are created (basically, every time I run Outlook)?
    Much appreciated. - Paul M.

     
    Hello,
    When you set up backup from two disk drives on one PC to My Cloud via Smartware, do they both backup simultaneously, or do I need to do one, then the other?
    You will need to do one first and then the other and the last one that is selected will be done automatically.
    - Does my PC have to be on my home network (where My Cloud is installed) in order to run Smartware?
    Yes.
    Can I be on the internet outside of my home and run Smartware?
    WD Smartware does not perform remote backups. 
     

  • Adding a RAID card to help speed up export (and other drive question) in Premiere Pro CC

    First of all, I have read Tweakers Page exporting section because that is where my primary concern is. First my questions, then background and my current and proposed configurations:
    Question 1: Will adding a hardware RAID controller, such as an LSI MegaRAID remove enough burden from the CPU managing parity on my software RAID 5 that the CPU will jump for joy and export faster?
    Question 2: If true to above, then compare thoughts on adding more smaller SSDs for either a one volume RAID 0 or smaller two volume RAID 0 to complement existing HDD RAID 5. That is, I'm thinking of buying four Samsung 850 Pro 128 GB SSDs to put in a four disk volume to handle everything (media/projects, media cache, previews, exports), or split it up into two volumes of two disks each and split the duties, or keep the four disk volume idea and put the previews & exports on my HDD RAID 5 array.
    The 850s are rated at 550/470 MB/s sequential read/write, so I could get around 2000/1500 MB/s read/write in a four-disk RAID 0, or half that if I split them into two volumes to keep volumes from reading and writing at the same time, if that really matters with these SSDs.
    The Tweakers Page made a few comments. One is that splitting duties among different disks, rather than using one large efficient RAID, may actually slow things down. Since the SSDs are much faster than a single HDD, I'm thinking that is no longer accurate, so I'm leaning toward the four-disk configuration: OS & programs on the C drive, media & projects on D (HDD RAID 5), pagefile & media cache on one SSD RAID 0, and previews & exports on a second SSD RAID 0 (or combine the two RAID 0s and their duties).
    Just trying to get a perspective here, since I haven't purchased anything yet. Any experience/stories, I would appreciate.
    My current drive configuration:
    My D drive is software RAID 5 consisting of four 1 TB Western Digital RE4 (RED) 7200 RPM HDDs with a CrystalDiskMark SEQ Read/Write of 339/252 MB/s.
    The C drive is SSD 500 GB (Samsung 840 (not Pro) and does 531/330 MB/s. My OS, Program Files and Page File are on C, and data/media files/project, etc all are on the RAID drive.
    Problem:
    Current setup allows for smooth editing, only the exporting seems slow, often taking between two and two and a half times the video length to export. Thus a 10 minute video takes 20-30 minutes to export. 15 minute video can take 30-40 minutes to export. The first 10% of the two-pass export takes under a minute (seems fast), but it gets slower where the final 10 or 20% can hang for many minutes like my system is running out of steam. So where is the waste?
    I have enabled hardware acceleration (did the GPU hack since my GPU isn't listed) and it may spike at 25% usage a few times and eat up 600 MB of VRAM (I have 2 GB of VRAM), otherwise it is idle the whole export. The CPU may spike at 50% but it doesn't seem overly busy either.
    Our timeline is simple with two video streams and two audio streams (a little music and mostly voice) with simple transitions (jump cuts or cross dissolves). We sometimes fast color correct, so that might use the GPU? Also, since we film in 1080 60P and export 1080 29.97 frames/sec, I think that is scaling and uses the GPU. I know without the GPU, it does take a lot longer. I have ruled out buying a faster GPU since it doesn't appear to be breaking a sweat. I just need to know if my system is bottlenecked at the hard drive level because I'm using software RAID and my disks are slow and will hardware RAID significantly reduce the CPU load so it can export faster.
    Our files are not huge in nature. Most of our clips are several MB each. Total project files are between 5 GB and 10 GB for each video, with the Windows Media File export being 500 MB to 1.2 GB on average. We shoot using Panasonic camcorders, so the original files are AVCHD, I believe (.MTS files?).
    Considerations:
    1. I'm thinking of buying (and future proofing) an LSI Logic MegaRAID 9361-8i that is 12Gb/s SAS and SATA (because some current SSDs can exceed the 6Gb/s standard).
    2. I'm not replacing my current RAID 5 HDDs because not in my budget to upgrade to 6 or more large SSDs. These drives are more important to me for temporary storage because I remove the files once backed up. I don't mind a few inexpensive smaller SSDs if they can make a significant difference for editing and exporting.
    I can only guess my HDD RAID is slow but the CPU is burdened with parity. I would imagine running RAID 10 would not help much.
    My setup:
    CPU - i7-3930K CPU @4.5 GHz
    RAM - G.SKILL Ripjaws Z Series 32GB (4 x 8GB) DDR3 2133 @2000
    Motherboard - ASUS P9X79 WS LGA 2011
    GPU - Gigabyte GeForce GTX 660 OC 2GB (performed the compatibility list hack to enable hardware acceleration).
    C drive - 500 GB Samsung 840 SSD (Windows 7 Pro 64 bit and programs).
    D drive - four 1 TB WD RE4 Enterprise HDDs 7200 RPMs in software RAID 5
    Case - Cooler Master HAF X
    CPU Fan - Cooler Master Hyper 212 EVO with 120 mm fan
    Power Supply - Corsair Pro Series AX 850 Watt 80 Plus Gold
    Optical Drive - Pioneer BDR - 208DBK
    thanks in advance,
    Eric

    Software RAID 5 off the motherboard? Not a good idea, from what I have read here on this forum from experts like Harm Millard and others. They have mentioned a large CPU overhead in doing this, causing sub-par performance. RAID 0 off the motherboard will not do this, however; RAID 0 would provide optimum speed, but with the risk of total data loss if any drive fails. You may wish to reconfigure your RAID as RAID 0, but you would need to diligently back up its entire volume, perhaps onto a quality 4 TB drive, very frequently.
    A lot depends on the current and future codecs you plan to edit. You may not want to sink a lot of money into an older setup that may have trouble with more demanding future codecs. For now, in the 1080p realm, your rig should be OK. The read/write performance of your current RAID 5 setup is not great and is a definite drag on performance; the rest of your components appear to be fine. The Samsung SSD, though not ideal, is OK; its write speed is way lower than the Pro model's, but the drive is used mainly for read operations. Since you have Windows 7 Pro, and not Windows 8, you can put the entire Windows page file onto the RAID 0 you might create; this will take that frequent read/write load off the SSD. Read the Tweakers Page to see how to best tune your machine. To use your current setup most efficiently, without investing much money, you would: a. create the RAID 0 off the motherboard (putting all media and project files on it); b. install a quality 7200 RPM 4 TB HDD to serve as a backup of the RAID array. Then install a Crucial M550 256 GB or larger SSD (close in performance to the Samsung 850 Pro, much cheaper) to hold all previews, cache and media cache files, and to use as the global performance cache for After Effects, if you use that program. Exporting can be done to another Crucial M550 for best speed, or just to either the first Crucial or the 4 TB drive. Your current GPU will accelerate exports of any video involving scaling and any GPU-accelerated effects. Your CPU is still important, both for serving the data to and from the GPU and for decoding and encoding the video the GPU doesn't handle; your high CPU clock speed helps performance there. You may want to look into overclocking your video card using MSI Afterburner or a similar free program; increasing the memory clock speed can raise performance and cut export times on GPU-effect-heavy timelines or scaling operations. On my laptop, I export 25% faster doing this. With my new i7 4700HQ laptop, I export in the range of your current machine, about 2 to 3 times the length of the original video. Properly set up, your desktop machine should blow this away!
    Visit the PPBM7 website and test your current setup to identify bottlenecks or performance issues, then re-test after making improvements to your machine to see how it does. Be aware that new codecs are coming (H.265/HEVC, etc.) which may demand more computer horsepower to edit, as they are even more compressed and engineered for streaming high quality at lower bandwidth on the internet. The new Haswell-E, with its quad-channel memory, 8-core option, and large number of PCIe gen 3 lanes, goes further in being prepared for 4K and more. Testing by Eric Bowen has shown the newer PPro versions provide much better processing of 4K than older versions.
