RowCount for WDA Tables based on SCROLLBAR table rendering

Hi.
We are using a standard WDA table to display some data. Our application doesn't explicitly have the application parameter WDTABLENAVIGATION set, yet it renders our tables as scrollable (as if we had set the value 'SCROLLBAR'). I suspect it is following a server-level parameter.
Even when we set 'footerVisible' to true, the footer is not displayed. Since we need to display the row count, does anyone have an idea how to show it?
Regards,
Douglas Frankenberger

>
Douglas Frankenberger wrote:
> Hi.
>
> We are using a standard WDA table to display some data. Our application doesn't explicitly have the application parameter WDTABLENAVIGATION set, yet it renders our tables as scrollable (as if we had set the value 'SCROLLBAR'). I suspect it is following a server-level parameter.
>
> Even when we set 'footerVisible' to true, the footer is not displayed. Since we need to display the row count, does anyone have an idea how to show it?
>
>
> Regards,
> Douglas Frankenberger
I assume you are on NetWeaver 7.0 Enhancement Package 1. As of 7.01 the paginator option no longer exists and is ignored in all settings; only the scrollbar is possible.
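One workaround that does not depend on the paginator is to compute the row count yourself and show it in a TextView (or in the table's header text) next to the table. A minimal sketch, assuming the table's dataSource is a context node called DATA_TABLE and that you have created a STRING context attribute ROW_COUNT bound to the TextView's text property (both names are assumptions, not standard parameters):

METHOD wddomodifyview.
  DATA: lo_node  TYPE REF TO if_wd_context_node,
        lv_count TYPE i,
        lv_text  TYPE string.

  " Node bound to the table's dataSource property (node name is an assumption)
  lo_node = wd_context->get_child_node( name = 'DATA_TABLE' ).
  lv_count = lo_node->get_element_count( ).

  lv_text = lv_count.
  CONDENSE lv_text.
  CONCATENATE lv_text 'rows' INTO lv_text SEPARATED BY space.

  " ROW_COUNT is the attribute we created for this purpose
  wd_context->set_attribute( name = 'ROW_COUNT' value = lv_text ).
ENDMETHOD.

The same attribute could instead be bound to the table's header text if you prefer not to add a separate TextView.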

Similar Messages

  • Customized scrollbar for af:table,af:panelgroup & af:panelbox

    Hi All,
    I'm using JDev version 11.1.15. My requirement is to customize the scroll bars in Table, PanelGroup & PanelBox, e.g. replacing the background of the scroll track with a custom image and the scroll bar itself with a custom scroll bar image.
    Any inputs on this much appreciated.
    Thanks in Advance.
    Kris.

    Hi Kris,
    I am able to change the scroll bar colors with the code below for the document and dialog components. Hopefully ADF allows the same code to be applied to the af:table, af:panelgroup & af:panelbox components.
    af|document {
      scrollbar-arrow-color: #000000;
      scrollbar-3dlight-color: #CCCCCC;
      scrollbar-darkshadow-color: #999999;
      scrollbar-face-color: #D4D0C8;
      scrollbar-highlight-color: #FFFFFF;
      scrollbar-shadow-color: #CCCCCC;
      scrollbar-track-color: #404040;
    }
    af|dialog {
      scrollbar-arrow-color: #000000;
      scrollbar-3dlight-color: #CCCCCC;
      scrollbar-darkshadow-color: #999999;
      scrollbar-face-color: #D4D0C8;
      scrollbar-highlight-color: #FFFFFF;
      scrollbar-shadow-color: #CCCCCC;
      scrollbar-track-color: #404040;
    }
    Thank you.
    Regards,
    Mogilicherla Suman Kumar

  • How to render Inline scrollbars for a table

    Hi All,
    I'm using a large table with nearly 30 columns; the width property of the table is set to 2200 and No Wrap is set to TRUE on all the columns.
    Is there any way to render an inline horizontal scrollbar for the table itself, instead of the scrollbars being rendered on the whole OAF page?
    Please help me.
    Regards
    Srikanth Enuguru

    I can't help you with the inline scroll bar you are looking for (I don't think we have anything like this in OA Framework). But you could consider showing another region using the hide/show feature, and use that region to show all the additional information for which you need so many columns.
    Regards
    Sumit

  • WDA Table element - scroll buttons missing

    Hi !
    Being a relative newbie in WDA, I am trying out the table control [tutorial|http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cb243c45-0801-0010-eb9c-88669007f130] for Web Dynpro ABAP. Even though I get the results from table SFLIGHT on the browser screen, I cannot see the 'next page' / 'last page' scroll buttons in NW 7.0 EHP2. I tried inserting the 'toolbar' element and still cannot see the buttons. Is it a separate setting?
    The tutorial is all pre NW 7.0 EHP2, where the buttons seem to come automatically with the table element.
    Oogway.

    Oogway wrote:
    I am interested in getting the footer buttons (for accessible mode) displayed at the bottom of the page. Setting the parameter WDTABLENAVIGATION to PAGINATOR or SCROLLBAR doesn't make any difference: the scrollbar is still visible. However, for tables containing many columns it is difficult to use the browser scrollbar to reach the edge of the table control and then use the table scrollbar. It is for this case that the accessible-mode buttons in the footer would be really helpful.
    Hi,
    Your question should really be: the paginator is missing.
    As per your requirement, you want the paginator (footer with buttons) for your table. From NW 7.01 onwards, scrollbar is the default value for tables, and in Lightspeed rendering only the scrollbar is used.
    There are still some ways to get the paginator back, as discussed in the threads "row count in table ui element"
    and "WDTABLENAVIGATION".
    But shifting to the paginator may cause performance issues as well.
    I hope this solves your problem.
    Thanks,
    Chandra
    Edited by: Chandrashekhar Mahajan on Nov 15, 2011 11:41 AM

  • How to come up with a magic number for any table that returns more than 32KB?

    I am in a situation where I am trying to retrieve values from multiple tables and publish them as XML output. The problem is that, depending on the data, some tables return more than 32KB and some less. Less than 32KB is not an issue, as XML generation is smooth; the minute a result exceeds 32KB, a run-time error is raised. I am wondering if there is any way to ensure that, as soon as a query's result exceeds 32KB, it is broken up; say, if the result is 35KB, I should break it into 32KB and 3KB pieces and then pass that data on to be published as XML output. This applies not just to one table, but to all the tables called in the function.
    Is there any way? I am out of ideas, and I have not done anything this complex from a production-support standpoint. I would appreciate it if someone could guide me on this.
    The way it is, is as follows:
    I have a table called ctn_pub_cntl
    CREATE TABLE CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id          NUMBER(18)
    ,table_name                  VARCHAR2(50)
    ,last_pub_tms              DATE
    ,queue_name               VARCHAR2(50)
    ,dest_system              VARCHAR2(50)
    ,frequency                  NUMBER(6)
    ,status                      VARCHAR2(8)
    ,record_create_tms          DATE
    ,create_user_id                VARCHAR2(8)
    ,record_update_tms          DATE
    ,update_user_id             VARCHAR2(8)
    ,CONSTRAINT ctn_pub_cntl_id_pk PRIMARY KEY(ctn_pub_cntl_id)
    );
    Data for this is:
    INSERT INTO CTNAPP.ctn_pub_cntl
      (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES
      (CTNAPP_SQNC.nextval, 'TRKFCG_SBDVSN',
       TO_DATE('10/2/2004 10:17:44PM','MM/DD/YYYY HH12:MI:SSPM'),
       'UT.TSD.TSZ601.UNP', 'SAP', 15);
    INSERT INTO CTNAPP.ctn_pub_cntl
      (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES
      (CTNAPP_SQNC.nextval, 'TRKFCG_TRACK_SGMNT_DN',
       TO_DATE('02/06/2015 9:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
       'UT.TSD.WRKORD.UNP', 'SAP', 30);
    INSERT INTO CTNAPP.ctn_pub_cntl
      (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES
      (CTNAPP_SQNC.nextval, 'TRKFCG_FXPLA_TRACK_LCTN_DN',
       TO_DATE('10/2/2004 10:17:44PM','MM/DD/YYYY HH12:MI:SSPM'),
       'UT.TSD.YRDPLN.INPUT', 'SAP', 30);
    INSERT INTO CTNAPP.ctn_pub_cntl
      (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES
      (CTNAPP_SQNC.nextval, 'TRKFCG_FXPLA_TRACK_LCTN2_DN',
       TO_DATE('02/06/2015 9:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
       'UT.TSD.TSZ601.UNP', 'SAP', 120);
    INSERT INTO CTNAPP.ctn_pub_cntl
      (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES
      (CTNAPP_SQNC.nextval, 'TRKFCG_FXPLA_TRACK_LCTN2_DN',
       TO_DATE('04/23/2015 11:50:00PM','MM/DD/YYYY HH12:MI:SSPM'),
       'UT.TSD.YRDPLN.INPUT', 'SAP', 10);
    INSERT INTO CTNAPP.ctn_pub_cntl
      (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES
      (CTNAPP_SQNC.nextval, 'TRKFCG_FIXED_PLANT_ASSET',
       TO_DATE('04/23/2015 11:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
       'UT.TSD.WRKORD.UNP', 'SAP', 10);
    INSERT INTO CTNAPP.ctn_pub_cntl
      (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES
      (CTNAPP_SQNC.nextval, 'TRKFCG_OPRLMT',
       TO_DATE('03/26/2015 7:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
       'UT.TSD.WRKORD.UNP', 'SAP', 30);
    INSERT INTO CTNAPP.ctn_pub_cntl
      (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES
      (CTNAPP_SQNC.nextval, 'TRKFCG_OPRLMT_SGMNT_DN',
       TO_DATE('03/28/2015 12:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
       'UT.TSD.WRKORD.UNP', 'SAP', 30);
    COMMIT;
    Once the above data is inserted and committed, I create a function in a package:
    CREATE OR REPLACE PACKAGE CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH
    IS
    TYPE tNameTyp IS TABLE OF ctn_pub_cntl.table_name%TYPE INDEX BY BINARY_INTEGER;
    g_tName tNameTyp;
    TYPE tClobTyp IS TABLE OF CLOB INDEX BY BINARY_INTEGER;
    g_tClob tClobTyp;
    FUNCTION GetCtnData(p_nInCtnPubCntlID IN CTN_PUB_CNTL.ctn_pub_cntl_id%TYPE,p_count OUT NUMBER ) RETURN tClobTyp;
    END CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH;
    --Package body
    CREATE OR REPLACE PACKAGE BODY CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH
    IS
         doc           xmldom.DOMDocument;
         main_node     xmldom.DOMNode;
         root_node     xmldom.DOMNode;
         root_elmt     xmldom.DOMElement;
         child_node    xmldom.DOMNode;
         child_elmt    xmldom.DOMElement;
         leaf_node     xmldom.DOMNode;
         elmt_value    xmldom.DOMText;
         tbl_node      xmldom.DOMNode;
         table_data    XMLDOM.DOMDOCUMENTFRAGMENT;
         l_ctx         DBMS_XMLGEN.CTXHANDLE;
         vStrSqlQuery  VARCHAR2(32767);
         l_clob        tClobTyp;
         l_xmltype     XMLTYPE;
    --Local Procedure to build XML header    
    PROCEDURE BuildCPRHeader IS
      BEGIN
        child_elmt := xmldom.createElement(doc, 'PUBLISH_HEADER');
        child_node  := xmldom.appendChild (root_node, xmldom.makeNode (child_elmt));
        child_elmt := xmldom.createElement (doc, 'SOURCE_APLCTN_ID');
        elmt_value := xmldom.createTextNode (doc, 'CTN');
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
        child_elmt := xmldom.createElement (doc, 'SOURCE_PRGRM_ID');
        elmt_value := xmldom.createTextNode (doc, 'VALUE');
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
        child_elmt := xmldom.createElement (doc, 'SOURCE_CMPNT_ID');
        elmt_value := xmldom.createTextNode (doc, 'VALUE');
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
        child_elmt := xmldom.createElement (doc, 'PUBLISH_TMS');
        elmt_value := xmldom.createTextNode (doc, TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
    END BuildCPRHeader;
    --Get table data based on table name
    FUNCTION GetCtnData(p_nInCtnPubCntlID IN CTN_PUB_CNTL.ctn_pub_cntl_id%TYPE,p_Count OUT NUMBER) RETURN tClobTyp IS
        vTblName      ctn_pub_cntl.table_name%TYPE;
        vLastPubTms   ctn_pub_cntl.last_pub_tms%TYPE;
    BEGIN
                g_vProcedureName:='GetCtnData';   
                g_vTableName:='CTN_PUB_CNTL';
            SELECT table_name,last_pub_tms
            INTO   vTblName, vLastPubTms
            FROM   CTN_PUB_CNTL
            WHERE  ctn_pub_cntl_id=p_nInCtnPubCntlID;
        -- Start the XML Message generation
            doc := xmldom.newDOMDocument;
            main_node := xmldom.makeNode(doc);
            root_elmt := xmldom.createElement(doc, 'PUBLISH');
            root_node := xmldom.appendChild(main_node, xmldom.makeNode(root_elmt));
          --Append Table Data as Publish Header
            BuildCPRHeader;
          --Append Table Data as Publish Body
           child_elmt := xmldom.createElement(doc, 'PUBLISH_BODY');
           leaf_node  := xmldom.appendChild (root_node, xmldom.makeNode(child_elmt));
           DBMS_SESSION.SET_NLS('NLS_DATE_FORMAT','''YYYY:MM:DD HH24:MI:SS''');
           vStrSqlQuery := 'SELECT * FROM ' || vTblName
                          || ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ' ;
                        --  ||  ' AND rownum < 16'
          DBMS_OUTPUT.PUT_LINE(vStrSqlQuery);
           l_ctx  := DBMS_XMLGEN.NEWCONTEXT(vStrSqlQuery);
          DBMS_XMLGEN.SETNULLHANDLING(l_ctx, 0);
          DBMS_XMLGEN.SETROWSETTAG(l_ctx, vTblName);
          -- Append Table Data as XML Fragment
          l_clob(1):=DBMS_XMLGEN.GETXML(l_ctx); 
          elmt_value := xmldom.createTextNode (doc, l_clob(1));
         leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
         xmldom.writeToBuffer (doc, l_clob(1));
         l_clob(1):=REPLACE(l_clob(1),'&lt;?xml version=&quot;1.0&quot;?&gt;', NULL);
         l_clob(1):=REPLACE(l_clob(1),'&lt;', '<');
         l_clob(1):=REPLACE(l_clob(1),'&gt;', '>');
         DBMS_OUTPUT.put_line('Answer is ' || l_clob(1));
         RETURN l_clob;
         EXCEPTION
            WHEN NO_DATA_FOUND THEN
            DBMS_OUTPUT.put_line('There is no data with' || SQLERRM);
            g_vProcedureName:='GetCtnData';
            g_vTableName:='CTN_PUB_CNTL';
            g_vErrorMessage:=SQLERRM|| g_vErrorMessage;
            g_nSqlCd:=SQLCODE;
            ctn_log_error('ERROR',g_vErrorMessage,'SELECT',g_nSqlCd,p_nInCtnPubCntlID,g_vPackageName,g_vProcedureName,g_vTableName);
            WHEN OTHERS THEN
           DBMS_OUTPUT.PUT_LINE('ERROR : ' || SQLERRM);
           ctn_log_error('ERROR',g_vErrorMessage,'OTHERS',g_nSqlCd,p_nInCtnPubCntlID,g_vPackageName,g_vProcedureName,g_vTableName);
    END GetCtnData;
    PROCEDURE printClob (result IN OUT NOCOPY CLOB) IS
        xmlstr   VARCHAR2 (32767);
        line     VARCHAR2 (2000);
    BEGIN
        xmlstr := DBMS_LOB.SUBSTR (result, 32767);
        LOOP
           EXIT WHEN xmlstr IS NULL;
           line := SUBSTR (xmlstr, 1, INSTR (xmlstr, CHR (10)) - 1);
           DBMS_OUTPUT.put_line (line);
           xmlstr := SUBSTR (xmlstr, INSTR (xmlstr, CHR (10)) + 1);
        END LOOP;
    END printClob;
    END CTN_PUB_CNTL_EXTRACT_PUBLISH;
    If you notice my query:
    vStrSqlQuery := 'SELECT * FROM ' || vTblName
                          || ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ' ;
                         ||  ' AND rownum < 16'
    The minute I comment out
    ||  ' AND rownum < 16'
    it throws an error, because this query returns around 600 rows and all of those rows need to be published as XML. To make things worse, there is a C program in between as well: C calls my packaged function, does all the further processing, and then the results are passed back to the C program. C obviously does not recognise CLOBs, so somewhere in the process I have to convert the CLOB to VARCHAR2, or use a VARCHAR2 array instead of a CLOB as the return type (see the chunking sketch after my post). This is my challenge.
    If anyone can help me find the required magic number, and briefly explain how, I would appreciate it. Many thanks in advance.
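    For the CLOB-to-VARCHAR2 part of the problem, one option is to slice the finished CLOB into a collection of fixed-size pieces rather than hunting for a row limit. A minimal anonymous-block sketch (the chunk size and the dummy CLOB are assumptions; in the real function you would slice the generated XML):
    DECLARE
      TYPE t_vchar_tab IS TABLE OF VARCHAR2(32767) INDEX BY PLS_INTEGER;
      l_chunks t_vchar_tab;
      l_clob   CLOB := TO_CLOB(RPAD('x', 20000, 'x'));  -- stand-in for the generated XML
      l_len    PLS_INTEGER;
      l_pos    PLS_INTEGER := 1;
      l_size   CONSTANT PLS_INTEGER := 8000;            -- chunk size; pick what the C interface can take
      i        PLS_INTEGER := 0;
    BEGIN
      l_len := DBMS_LOB.GETLENGTH(l_clob);
      WHILE l_pos <= l_len LOOP
        i := i + 1;
        l_chunks(i) := DBMS_LOB.SUBSTR(l_clob, l_size, l_pos);  -- (lob, amount, offset)
        l_pos := l_pos + l_size;
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('Pieces returned: ' || i);
    END;
    /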

    Not sure I understand which part is failing.
    Is it the C program calling your packaged function? Or does the error occur in the PL/SQL code, in which case you should be able to pinpoint where it's wrong?
    A few comments:
    1) Using DOM to build XML out of relational data? What for? Use SQL/XML functions (see the sketch after this reply).
    2) Giving sample data is usually great, but it's not useful here since we can't run your code. We're missing the base tables.
    3) This is wrong :
    vStrSqlQuery := 'SELECT * FROM ' || vTblName                     || ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ' ;
    A bind variable should be used here for the date.
    4) This is wrong :
    elmt_value := xmldom.createTextNode (doc, l_clob(1));
    createTextNode does not support CLOB so it will fail as soon as the CLOB you're trying to pass exceeds 32k.
    Maybe that's the problem you're referring to?
    5) This is most wrong :
         l_clob(1):=REPLACE(l_clob(1),'&lt;?xml version=&quot;1.0&quot;?&gt;', NULL); 
         l_clob(1):=REPLACE(l_clob(1),'&lt;', '<'); 
         l_clob(1):=REPLACE(l_clob(1),'&gt;', '>'); 
    I understand what you're trying to do but it's not the correct way.
    You're trying to convert a text() node representing XML in escaped form back to XML content.
    The problem is that there are other things to take care of besides just '&lt;' and '&gt;'.
    If you want to insert an XML node into an existing document, treat that as an XML node, not as a string.
    Anyway,
    Anyone that can help me to find out the required magic number
    That would be a bad idea. Fix what needs to be fixed.
    And please clearly state which part is failing : the C program or the PL/SQL code?
    I'd vote for PL/SQL, as pointed out in [4].
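    To illustrate points 1 and 3: the whole document can be produced by a single SQL/XML query, which also lets the date be passed as a bind variable instead of being concatenated into the SQL string. A rough sketch only; the table and column names below are placeholders, not the poster's real schema:
    SELECT XMLSERIALIZE(DOCUMENT
             XMLELEMENT("PUBLISH",
               XMLELEMENT("PUBLISH_HEADER",
                 XMLELEMENT("SOURCE_APLCTN_ID", 'CTN'),
                 XMLELEMENT("PUBLISH_TMS", TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'))),
               XMLELEMENT("PUBLISH_BODY",
                 XMLAGG(XMLELEMENT("ROW",
                          XMLFOREST(t.some_col  AS "SOME_COL",
                                    t.other_col AS "OTHER_COL")))))
             AS CLOB) AS publish_doc
    FROM   some_source_table t
    WHERE  t.record_update_tms <= :last_pub_tms;  -- bind variable instead of a concatenated TO_DATE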

  • [ADF Help] How to create a view for multiple tables

    Hi,
    I am using Jdeveloper 11G and ADF framework, and trying to create a view to update multiple tables.
    ex:
    Table A has these fields: ID, Name
    Table B has these fields: ID, Address
    A.ID and B.ID are primary keys.
    B.ID has FK relationship with A.ID
    (basically, these tables have one-to-one relation)
    I want to create a view object, which contains these fields: B.ID (or A.ID), A.Name, B.Address.
    So I can execute C,R,U,D for both tables.
    I create these tables in DB, and create entity objects for these tables.
    So there are 2 entity objects and 1 association.
    Then I create a view object based on B and add fields of A into the view:
    If the association is not a "Composition Association":
    when I run the model ("Oracle Business Component Browser") and try to insert new data, the fields of A cannot be edited.
    If the association is a "Composition Association" and I click the insert button, I get
    "oracle.jbo.InvalidOwnerException: JBO-25030: Failed to find or invalidate owning entity"
    If I create a view object based on A and add the fields of B into the view:
    when I run the model and try to insert new data, the fields of B cannot be edited, whether or not the association is a composition association.
    So... how can I create a view for multiple tables correctly?
    Thanks for any advices!
    Here are some pictures about my problem, if there is any unclear point, please let me know.
    http://leonjava.blogspot.com/2009_10_01_archive.html
    (A is Prod, B is CpuSocket)
    Edited by: user8093176 on Oct 25, 2009 12:29 AM

    Hi Branislav,
    Thanks, but the result is the same...
    In step 2 of creating the view object, I can select the entity objects to be added into the view.
    If I select A first and then select B (the "Source Usage" of B is A), then finish the wizard:
    when I try to create a new record in the view, I can't edit any attributes of B (those fields are disabled).
    If I select B first and then select A when creating the view object, the result is similar...
    Thanks for any further suggestions.
    Leon

  • Single row editing for multiple tables in a single page

    Hi!
    I have split a table with very many columns into several tables with fewer columns each. There is an ID column (sequence generated) that is the primary key common to all tables. Obviously the relationship between these tables is 1-to-1, so it's not about having to build master-detail pages.
    What I need now is to edit all these tables in a single page: one single row from each table. The Automated Row Fetch & Automatic Row Processing (DML) processes used for the old table must be replaced by something else.
    Is there a way to use several pairs of such processes for several tables on a single page?
    I want to avoid manually writing processes with a lot of PL/SQL code to handle row fetch & update.
    Keep in mind that the page will have an ID item that can be used to identify the unique row in each of my tables.
    Thanks,
    Sorin

    I have split a table with very many columns into several tables with fewer columns each
    IMHO that is a very bad idea. There is nothing wrong with having many columns in a table. If you would like to split them to better "categorize" them, just create views on the table and expose different subsets of the columns.
    I want to avoid manually writing processes with a lot of PL/SQL code to handle row fetch & update.
    That is unavoidable given how you have designed the underlying tables.
    You could try creating a view that joins all the tables by the PK and creating a form based on that view. Then write an INSTEAD OF trigger on the view that "doles out" the DML on the view to each of the underlying tables (a sketch follows below).
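    A minimal sketch of that approach, using hypothetical tables tab_a(id, name) and tab_b(id, address); INSTEAD OF UPDATE and DELETE triggers follow the same pattern:
    CREATE OR REPLACE VIEW combined_v AS
    SELECT a.id, a.name, b.address
    FROM   tab_a a
    JOIN   tab_b b ON b.id = a.id;
    CREATE OR REPLACE TRIGGER combined_v_ioi
    INSTEAD OF INSERT ON combined_v
    FOR EACH ROW
    BEGIN
      -- "dole out" the DML on the view to the underlying tables
      INSERT INTO tab_a (id, name)    VALUES (:NEW.id, :NEW.name);
      INSERT INTO tab_b (id, address) VALUES (:NEW.id, :NEW.address);
    END;
    /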

  • How to get row count(*) for each table that matches a pattern

    I have the following query that returns all tables that match a pattern (tablename_ followed by 4 digits). I also want to return the row counts for these tables.
    Currently a single column is returned: the table name. I want to add a RowCount column.
    DECLARE @SQLCommand nvarchar(4000)
    DECLARE @TableName varchar(128)
    SET @TableName = 'ods_TTstat_master' --<<<<<< change this to a table name
    SET @SQLCommand = 'SELECT [name] as zhistTables FROM dbo.sysobjects WHERE name like ''%' + @TableName + '%'' and objectproperty(id,N''IsUserTable'')=1 ORDER BY name DESC'
    EXEC sp_executesql @SQLCommand

    The like operator requires a string operand.
    http://msdn.microsoft.com/en-us/library/ms179859.aspx
    Example:
    DECLARE @Like varchar(50) = '%frame%';
    SELECT * FROM Production.Product WHERE Name like @Like;
    -- (79 row(s) affected)
    For variable use, apply dynamic SQL:
    http://www.sqlusa.com/bestpractices/datetimeconversion/
    Row count for all tables:
    http://www.sqlusa.com/bestpractices2005/alltablesrowcount/ (see also the sketch below)
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
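    If the goal is just the name and row count of every matching table in one result set, a single catalog query is often enough. A sketch against sys.tables and sys.partitions, reusing the @TableName pattern from the original post (verify against your SQL Server version):
    DECLARE @TableName varchar(128) = 'ods_TTstat_master';
    SELECT t.[name]    AS zhistTables,
           SUM(p.rows) AS [RowCount]
    FROM sys.tables AS t
    JOIN sys.partitions AS p
      ON p.[object_id] = t.[object_id]
     AND p.index_id IN (0, 1)      -- 0 = heap, 1 = clustered index
    WHERE t.[name] LIKE '%' + @TableName + '%'
    GROUP BY t.[name]
    ORDER BY t.[name] DESC;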

  • Remote key for lookup tables

    Hi,
    I need some advice on remote keys for lookup tables.
    We have loaded lookup data from several client systems into the MDM repository. Each of the client systems can have differences in the lookup values. What we need to do is enable the key mappings so that the Syndicator knows which value belongs to which system.
    The tricky part: we haven't managed to send out the values based on the remote keys. We do NOT want to send the lookup tables themselves, but the actual main table records. All lookup data should be checked at the point of syndication, and only the used lookup values that originally came from a given system should be sent to that particular system; otherwise the tag should be blank.
    Is this the right approach to handle this requirement, or is there a different way to take care of it? What would be the right settings in the Syndicator?
    Help will be rewarded.
    Thank you very much
    best regards
    Nicolas

    Hi Andreas,
    that is correct. Let's take two examples:
    1) regions
    2) Sales Area data (qualified lookup data)
    Both tables are filled and loaded directly from the R/3 systems, so you would already know which value belongs to which system.
    The problem I have is that we will not map the remote key of the main table, because it will be blank for newly created master data (centralization scenario). Therefore we cannot map the remote keys of the attached lookup tables, can we?
    The remote key will only work for lookup tables if the remote key of the actual master data is mapped. Since we don't have the remote key (the local customer ID from R/3) in MDM, and since we do not create it at the point of syndication... what would the standard SAP scenario look like for that?
    This is nothing extraordinary, it's just a standard centralization scenario.
    Please advise.
    Thanks a lot
    best regards
    Nicolas

  • Can we use single DB Adapter for multiple tables

    Hi,
    I have 3 tables. One is the master table and the other 2 tables have primary/foreign key relationships with the master table.
    Data is entered into tables 2 & 3 (based on the master table's primary key).
    I have to use a DB Adapter in a BPEL process which will be in a transaction (what type of transaction is best in such a scenario?).
    - How do I implement it in BPEL?
    - Can I use a single DB Adapter for multiple tables? (*Views* should not be created for tables 1, 2 & 3),
    as we want to avoid using multiple DB Adapters, since in the future tables may increase or decrease...
    - While in the transaction, if table 2 fails on insert/update it should roll back on error, and likewise if table 3 fails on insert/update it should roll back tables 2 and 3.
    - How is access by concurrent users maintained/managed in such a scenario?
    But everything should be done using a single DB Adapter, without creating a separate view.
    Is this possible in BPEL?
    If not, how do we design and implement such a BPEL process in a transaction with the best performance (for both the BPEL process and DB resources)?
    thanks in advance
    thanks & regards,
    anvv sharma

    Hi,
    Thanks for your quick response.
    I have a few queries.
    If we use PL/SQL, how do we implement the transaction and exception handling appropriately, based on the error/exception, in BPEL?
    If table 2 or table 3 raises an error/exception, I have to handle it separately and report it accordingly.
    For example, if table 2 got an error/exception, the error handling would be 'Roll back due to error in Table 2'.
    Thanks & regards,

  • How to find out last updatde timestamp for a table

    Hi All,
    I want to find out the last updated timestamp of a table. Is there something available in the sys tables? Is there something that can be done with user_objects?
    Or is it necessary to write PL/SQL code to get the last updated timestamp for a table?
    Please provide some inputs.
    regards,
    Naresh

    Hi Oleg,
    Select * from user_objects: I believe it gives you the last_ddl_time, not the time an update or delete was done.
    Not sure if we have this feature in any new release.
    I am working on Oracle Database 10g Enterprise Edition Release 10.2.0.1.0.
    To the OP:
    I suggest that if you plan to audit, use the audit feature appropriate to your DB version.
    Or, to make it simpler, have a column called modified_date and update this column whenever you update the record (a sketch follows below).
    Regards,
    Bhushan
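    To illustrate the last suggestion, a minimal sketch (my_table and the trigger name are placeholders):
    ALTER TABLE my_table ADD (modified_date DATE);
    CREATE OR REPLACE TRIGGER my_table_stamp_mod_date
    BEFORE INSERT OR UPDATE ON my_table
    FOR EACH ROW
    BEGIN
      :NEW.modified_date := SYSDATE;  -- stamped automatically on every insert/update
    END;
    /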

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB Version is:
    BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    In our application, users upload files, resulting in records being inserted into a table. A file can contain anywhere from 10,000 to 1 million records.
    I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables when huge files are uploaded simultaneously. After gathering stats, the cost drops and the queries execute faster.
    We have 2 such tables which grow based on user file uploads. I would like to schedule a job to gather stats on these two tables during a non-peak hour, in addition to the nightly automated Oracle job.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    create or replace
    PROCEDURE p_manual_gather_table_stats AS
    TYPE ttab
    IS
        TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
        ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(ownname => USER, tabname => ltab(i) , estimate_percent => dbms_stats.auto_sample_size,
            method_opt => 'for all indexed columns size auto', degree =>
            dbms_stats.auto_degree ,CASCADE => TRUE );
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.
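    One way to keep the default collection job away from these two tables while still gathering them from your own job is to lock their statistics and gather with force => TRUE in the scheduled procedure. A sketch reusing the table names from the post (check the DBMS_STATS options against the 11.2 documentation):
    -- Run once: the nightly auto-stats job then skips these tables
    BEGIN
      DBMS_STATS.LOCK_TABLE_STATS(ownname => USER, tabname => 'TAB1');
      DBMS_STATS.LOCK_TABLE_STATS(ownname => USER, tabname => 'TAB2');
    END;
    /
    -- Inside p_manual_gather_table_stats, override the lock explicitly
    dbms_stats.gather_table_stats(ownname          => USER,
                                  tabname          => ltab(i),
                                  estimate_percent => dbms_stats.auto_sample_size,
                                  method_opt       => 'for all indexed columns size auto',
                                  degree           => dbms_stats.auto_degree,
                                  cascade          => TRUE,
                                  force            => TRUE);  -- gather even though stats are locked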

  • Applying table scroll bar for only table rows with table columns fixed.

    Hi OA gurus,
    I have implemented table scroll bars using OARawTextBean. There is no problem with the vertical and horizontal scroll bars; they work fine. But the requirement is to scroll only the table rows, with the table column headers fixed. So how do I achieve scrolling for only the table data, excluding the table headers?
    The code for the vertical and horizontal scroll bars is like this:
    OARawTextBean ors = (OARawTextBean)webBean.findChildRecursive("raw1");
    ors.setText("<div id='tabledivid' style='height:500px;width:100%;overflow:auto'>");
    OARawTextBean ore = (OARawTextBean)webBean.findChildRecursive("raw2");
    ore.setText("</div>");
    where raw1 and raw2 are raw text beans created above and below the table. But I don't know how to apply this only to the table rows, excluding the column headers. Can anybody give any ideas?
    Please help, this is very urgent.
    thanks
    Edited by: user630121 on Sep 29, 2008 5:17 AM
    Edited by: user630121 on Sep 29, 2008 5:18 AM

    Hi,
    I have a similar task to do: apply a scrollbar only at the table level.
    I tried the approach mentioned above, but I am getting a NullPointerException.
    Please explain more about raw1 and raw2.
    Rahul

  • Could anyone provide sample of WebDynpro Java TableFilter for Dynamic Table

    Hi
    I have a dynamic table. I have given it a context node as a data source; the node executes a web service and populates itself based on the result set of the query, so the number of columns and rows in this node varies.
    The table changes dynamically every time the user selects a different table type.
    When I created a TableFilter, I used IWDTableColumn from the current view and looped through the context elements, binding the values of the attributes of each context element to the column inside the loop. Table sorting works, but I am not able to get the filter working for the dynamic table: when I click the filter button, it does not filter, it just sorts.
    I am not sure what is wrong, but if someone can give me some input or leads I would highly appreciate it, especially a sample of Web Dynpro Java TableFilter for a dynamic table.
    Thanks in advance,
    Ragu.

    Hi Ragu,
    Please check this link for table filtering:
    http://wiki.sdn.sap.com/wiki/display/WDJava/GenericTableFilter+Code
    Since your dynamic table uses a web service, I assume it has fixed context attributes; just apply the filter in the wdDoModifyView method every time you execute/trigger the web service.
    Regards,

  • How to set initial column widths for a table

    What I'd like to do is to control the initial column widths for a table.
    I'm building the table by using a class which extends AbstractTableModel. This class takes care of setting the headers and reading the data for the table.
    If I do the following:
    VarTableModel vtm = new VarTableModel(vi);
    JTable jt = new JTable(vtm);
    JScrollPane jsp = new JScrollPane(jt);
    frame.getContentPane().add(jsp);
    frame.setVisible(true);
    I will see a table in my window. The widths of the columns are equal and are a function of the horizontal dimension of the window.
    I have tried to set the width of a column by doing the following:
    TableColumn aColumn;
    aColumn = jt.getColumn(vtm.getColumnName(0));
    aColumn.setWidth(40);
    But this has no effect on what gets displayed. I can force the widths that I want by using 'setMaxWidth' but this has the unfortunate side effect of not allowing the user to make the column wider if they want.
    What I'd like is a way to specify the widths of the columns when initially displayed and then let the user adjust to their liking.
    I'm sure that there is a way to do this, but I don't seem to understand where to intervene in the process to produce the effect that I want.
    Any help or suggestions will be greatly appreciated!

    Table.setAutoResizeMode(JTable.AUTO_RESIZE_OFF);
    Add this line and try your code.

    Thanks for the suggestion! Unfortunately, it doesn't seem to do the trick. Here is an abbreviated segment of my code:
    vtm = new VarTableModel(vi);
    JTable jt = new JTable(vtm);
    // we want a horizontal scrollbar, so turn resizing off
    // and we want to control column widths
    jt.setAutoResizeMode(JTable.AUTO_RESIZE_OFF);
    aColumn = jt.getColumn(vtm.getColumnName(0));
    aColumn.setWidth(40);
    . // same kind of thing for each column
    aColumn = jt.getColumn(vtm.getColumnName(4));
    aColumn.setWidth(400);
    JScrollPane jsp = new JScrollPane(jt);
    frame.getContentPane().add(jsp);
    frame.setVisible(true);
    If I do the above, the table gets displayed with each of the five columns the exact same width. So, the sizing of the columns is happening somewhere else despite turning auto resizing off.
    Where is that occurring and how can I intervene so that I can control the widths of the columns on initial display and still allow the user to adjust the widths if they so choose?
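    For what it's worth, the usual way to get an initial width that the user can still change is TableColumn.setPreferredWidth() rather than setWidth(): with auto-resize off, the layout honours the preferred widths, and dragging a column header still updates them. A self-contained sketch (DefaultTableModel stands in for VarTableModel):
    import javax.swing.JFrame;
    import javax.swing.JScrollPane;
    import javax.swing.JTable;
    import javax.swing.SwingUtilities;
    import javax.swing.table.DefaultTableModel;
    import javax.swing.table.TableColumnModel;
    public class ColumnWidthDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                DefaultTableModel model = new DefaultTableModel(
                        new Object[][] { { "a", "b", "c", "d", "e" } },
                        new Object[] { "C0", "C1", "C2", "C3", "C4" });
                JTable table = new JTable(model);
                // Auto-resize off: preferred widths are used and a horizontal
                // scrollbar appears in the enclosing JScrollPane when needed.
                table.setAutoResizeMode(JTable.AUTO_RESIZE_OFF);
                TableColumnModel columns = table.getColumnModel();
                columns.getColumn(0).setPreferredWidth(40);   // initial width only
                columns.getColumn(4).setPreferredWidth(400);  // user can still drag to resize
                JFrame frame = new JFrame("Column width demo");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.getContentPane().add(new JScrollPane(table));
                frame.pack();
                frame.setVisible(true);
            });
        }
    }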
