WebLogic LLR tables RECORDSTR column size issue

Hi,
We have configured an LLR (Last Logging Resource, which emulates XA two-phase commit with a non-XA data source) data source in our WebLogic 10.3.5 domain, and I see that WebLogic internally creates a WL_LLR_<servername> table per managed server in the schema configured for that LLR data source.
But during the commit of some transactions we get an error stating that the RECORDSTR column size (which defaults to 1000 bytes) is too small for the COMMIT RECORD. It works fine after we increased RECORDSTR to 2000 bytes, but what are the criteria for sizing that column? Our application has large global transactions; will 2000 bytes be enough to hold them?
So my question is: what actually gets stored in the LLR tables for a transaction - transaction logs or the commit record? And what does COMMIT RECORD actually mean?
Is there a way to specify the table name and column lengths of the LLR tables when creating the LLR data source itself?

Maybe this can shed some light on things: http://docs.oracle.com/cd/E21764_01/web.1111/e13737/transactions.htm#i1145819
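Regarding sizing the column yourself: the generated table can simply be widened while the affected managed server is shut down, which matches what you already did at 2000 bytes. A minimal sketch for Oracle, assuming the default generated name WL_LLR_<MANAGED_SERVER_NAME> (placeholder below) and the RECORDSTR column named in the error; verify the exact table name and any release-specific guidance against the documentation linked above:

ALTER TABLE WL_LLR_MANAGEDSERVER1 MODIFY (RECORDSTR VARCHAR2(4000));

Restart the managed server afterwards.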

Similar Messages

  • RoboHelp 10 - Table Preferred Column Width Issue

    I converted a RoboHelp 9 project to RoboHelp 10 and noticed that Column 1 in all tables has a preferred width setting of 93% and it is not allowing me to remove this setting. When I uncheck the Preferred width checkbox and click OK, then go back to the Table Properties>Column tab, the Preferred width checkbox is checked again with a width setting of 93%. The column width is not specified in the HTML, it is set to fit to the contents (<col  />). The odd thing is that Column 1 only displays this way in Design view. When previewing and publishing the topic, it displays correctly (fits to contents). This however makes editing content in tables extremely difficult in Design view. Any assistance is appreciated.

    If it doesn't have a width setting of 93%, where are you finding it is set to 93%?
    There was a bug in Rh10 with table widths and I cannot recall the exact details without some testing. I think it was along the lines that in design view there would be problems unless the columns had a width and so did the table.
    I do know that once I set up tables in a specific way, they were also fine in design view. Post back if stuck and I will look but it may not be for a few days.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • How to know which columns and table names had their column size modified

    Hi guys,
    I want to know which columns in which tables have been modified,
    i.e. a list of column names and table names whose column size was changed.
    Step 1:
    CREATE TABLE employees
    ( employee_number number(5),
      employee_name   varchar2(50),
      department_id   number(10)
    );
    CREATE TABLE supplier
    ( supplier_number number(5),
      supplier_name   varchar2(50)
    );
    CREATE TABLE customers
    ( customer_id   number(10) not null,
      customer_name varchar2(50),
      address       varchar2(50),
      city          varchar2(50),
      state         varchar2(25),
      zip_code      varchar2(10)
    );
    Step 2:
    ALTER TABLE employees
      MODIFY employee_number number(10);
    ALTER TABLE supplier
      MODIFY supplier_name varchar2(100);
    Step 3: query to display
    column_name        table_name
    employee_number    employees
    supplier_name      supplier
    How can I find the columns and table names whose column size was modified?
    Could you please provide a query?
    Thanks in advance

    09:35:50 SQL> desc dba_objects
    Name                           Null?    Type
    OWNER                                   VARCHAR2(30)
    OBJECT_NAME                             VARCHAR2(128)
    SUBOBJECT_NAME                          VARCHAR2(30)
    OBJECT_ID                               NUMBER
    DATA_OBJECT_ID                          NUMBER
    OBJECT_TYPE                             VARCHAR2(19)
    CREATED                                 DATE
    LAST_DDL_TIME                           DATE
    TIMESTAMP                               VARCHAR2(19)
    STATUS                                  VARCHAR2(7)
    TEMPORARY                               VARCHAR2(1)
    GENERATED                               VARCHAR2(1)
    SECONDARY                               VARCHAR2(1)
    NAMESPACE                               NUMBER
    EDITION_NAME                            VARCHAR2(30)
    LAST_DDL_TIME can be utilized.
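    A minimal sketch of that idea: LAST_DDL_TIME only narrows it down to which tables were altered after a given point in time, not to the individual column that was resized (the cut-off date below is a placeholder).
    select owner, object_name, last_ddl_time
    from   dba_objects
    where  object_type = 'TABLE'
    and    last_ddl_time > to_date('2024-01-01', 'YYYY-MM-DD')
    order  by last_ddl_time desc;
    To capture exactly which column changed and its old/new size, you would need DDL auditing or a DDL trigger that logs the ALTER TABLE statements.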

  • What table column size is needed to accommodate Unicode characters

    Hi guys,
    I have encountered something I don't understand, and I hope the gurus here can shed some light on it.
    I am running a non-Unicode database and decided to port the data over to a Unicode database.
    So:
    1) I exported the schema --> data.dmp
    2) Then I created the Unicode database + created a user
    3) Then I imported the schema into the database
    During the imp I can see that character conversion takes place.
    While importing data into the Unicode database I encountered an error
    saying a column size is too small,
    so I went to check the row whose column value is too large to fit in the table.
    I realised it has some [][][][] data, so I went to the live non-Unicode database and found the row. Indeed it has the same [][][][] rubbish data, so I suspect someone inserted a language other than English into the database.
    But regardless,
    I modified the column to a larger size, and now the row can be accommodated. However the data is still [][][].
    q1) Why? Since my database is now Unicode, this [][][] column data should have been converted to Unicode during the import, but I still cannot see what language it is.
    q2) Why does the [][][] data fit into the column on the non-Unicode database, but the same column size needs to be increased on the Unicode database?
    q3) While doing more research on Unicode, I read that a Unicode character takes up 2 bytes per character. A lot of my data is exactly the same size as the column, e.g.
    Name VARCHAR2(5);
    value - 'Peter'
    If characters take 2 bytes instead of 1 after converting to Unicode, isn't 'Peter' going to take up 10 bytes (2 bytes per character)?
    Why is it that I can still fit the data into the column?
    q4) Now that the Unicode database is up, I will be supporting characters from different languages around the world. How big should I set my column sizes? The longest a name can get, or what?
    Thanks guys!

    /// does oracle automatically "look" at each individual character in a word and determine how many bytes it should take?
    Characters usually originate from a keyboard, which has an associated keyboard layout and an associated character set encoding (a.k.a. code page, a.k.a. encoding). This means, the keyboard driver knows that when a key with a letter "á" on it is pressed on a French keyboard, and the associated character set encoding is MS Code Page 1252 (Oracle name WE8MSWIN1252), then one byte with the value 225 is generated. If the associated character set encoding is UTF-16LE (standard internal Windows encoding), two bytes 225 and 0 are generated. When the generated bytes travel through APIs, they may undergo character set conversions from one encoding to another encoding. The conversion algorithms use translation tables to find out how to translate a given byte sequence from one encoding to another encoding. In case of translation from WE8MSWIN1252 to AL32UTF8, Oracle will know that the byte sequence resulting from conversion of the code 225 should be 195 followed by 161. For a Chinese character, for example when converting it from ZHS16GBK, Oracle knows the resulting sequence as well, and this sequence is usually 3 bytes.
    This is how AL32UTF8 data gets into a database. Now, when Oracle processes a multibyte string, and needs to look at individual characters, for example to count them with LENGTH, or take a substring with SUBSTR, it uses information it has about the structure of the character set. Multibyte character sets are of two types: fixed-width and variable-width. Currently, Oracle supports only one fixed-width multibyte character set in the database: AL16UTF16, which is Oracle's name for Unicode UTF-16BE encoding. It supports this character set for NCHAR/NVARCHAR2/NCLOB data types only. This character set uses two bytes per each character code. To find the next code, 2 is simply added to the string pointer.
    All other Oracle multibyte character sets are variable-width character sets, including AL32UTF8. In most cases, the length of each character code can be determined by looking at its first byte. In AL32UTF8, the number of 1-bits in the most significant positions in the first byte before the first 0-bit tells how many bytes a character has. 0 such bits means 1 byte (such codes are identical to 7-bit ASCII), 2 such bits mean two bytes, 3 bits mean 3 bytes, 4 bits mean four bytes. 1 bit (e.g. the bit sequence 10) starts each second, third or fourth byte of a code.
    In other ASCII-based multibyte character sets, the number of bytes is usually determined by the value range of the first byte. Bytes below 128 means a one-byte code, bytes above 128 begin a two- or three-byte sequence, depending on the range.
    There are also EBCDIC-based (mainframe) multibyte character sets, a.k.a. shift-sensitive character sets, where a sequence of two-byte codes is introduced by inserting the SO character (code 14=0x0e) and ended by inserting the SI character (code 15=0x0f). There are also character sets, like ISO-2022-JP, which use more complicated byte sequences to define the length and meaning of byte sequences, but Oracle supports them only in a limited number of places.
    /// e.g. I have a word with 4 characters; the 3rd character is a Chinese character, the rest are ASCII characters
    /// will oracle use 4 bytes per character regardless of whether it is ASCII (English) or Chinese?
    No.
    /// or will it use 1 byte per English character and then 3 bytes for the Chinese character? e.g. total - 6 bytes taken
    It will use 6 bytes.
    Thnx,
    Sergiusz
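    A quick way to see the byte-versus-character distinction described above, and the usual answer to q4 (declare the columns with character length semantics instead of guessing a byte size), is a small test like the following; the table name is made up and an AL32UTF8 database is assumed:
    create table name_test ( name_bytes varchar2(5 byte),
                             name_chars varchar2(5 char) );
    insert into name_test values ('Peter', 'Peter');  -- 5 ASCII characters = 5 bytes, fits both columns
    select length(name_chars)  as char_count,
           lengthb(name_chars) as byte_count
    from   name_test;
    A value containing multibyte characters can exceed 5 bytes while still being 5 characters, so it would only fit the CHAR-sized column. Setting NLS_LENGTH_SEMANTICS=CHAR makes character semantics the default for new columns.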

  • Oracle APEX 4.0  -  Interactive Report - Table Column Filter Issue

    Environment: Oracle APEX 4.0  -  Interactive Report - Table Column header Filter Issue
    We have developed an interactive report using Oracle APEX 4.0 which contains around 3,000 rows. All the row values are unique. When we try to filter with the column header filter option available in the interactive report, we get only 1,000 records.
    Could someone explain why the APEX column header filter behaves this way, as if it does not display more than 1,000 distinct values?
    Is there a way or a workaround to get all the records into the column header filter?
    Thanks in advance.
    Krish

    Hi
    Thanks for the advice and this issue has been moved to the below URL
    Oracle APEX 4.0 - Interactive Report - Table Column Filter Issue Posted: No
    Krish

  • Data type column size does not match size of values returned

    I've tried searching but couldn't find what I was looking for, sorry if this has been answered or is a duplicate.
    I am using the (Java) method below to get the column definitions of a table. I am doing this through an ODBC connection (using Oracle ODBC driver) and I have NO CHOICE in the matter.
    The problem I am having is that one particular column has a column size that is smaller than the size of the data it purportedly holds (i.e. attendance_code = varchar size 3, but the data value is "Absent"). I suspect the data is actually a lookup situation, where my attendance_code column has a value of 'ABS' and Oracle goes and fetches the "Absent" value.
    My issue is that the meta data returned from ODBC is the actual column definition (ie varchar(3) and not varchar2(10) for example).
    The resultset metadata has a size of 3 for the column in question, and querying the schema also yields 3 for that column.
    How do I resolve this? I do not have access to all_columns table etc. I have to do it using ODBC metadata only.
    I've tried recreating the situation using the Oracle XE DB, but as I am not familiar with Oracle, I do not know how to recreate this scenario (which would then lead me to test other options with ODBC).
    TIA.
    The select I am using is SELECT * FROM <tableName>.
    public ArrayList<SchemaColumn> getColumnsForTable(String catalog,
                                                      String schema,
                                                      String tableName) {
        ArrayList<SchemaColumn> columns = new ArrayList<SchemaColumn>();
        ResultSet rs = null;
        try {
            DatabaseMetaData metaData = connection.getMetaData();
            rs = metaData.getColumns(catalog, schema, tableName, "%");
            SchemaColumn column;
            while (rs.next()) {
                column = new SchemaColumn();
                column.setName(rs.getString("COLUMN_NAME"));
                column.setDataType(rs.getInt("DATA_TYPE"));
                column.setTypeName(rs.getString("TYPE_NAME"));
                column.setColumnSize(rs.getInt("COLUMN_SIZE")); // <-- this is the value that comes back smaller than the data it actually returns
                column.setDecimalDigits(rs.getInt("DECIMAL_DIGITS"));
                column.setNumPrecisionRadix(rs.getInt("NUM_PREC_RADIX"));
                column.setNullable(rs.getInt("NULLABLE"));
                column.setRemarks(rs.getString("REMARKS"));
                column.setDefaultValue(rs.getString("COLUMN_DEF"));
                column.setSqlDataType(rs.getInt("SQL_DATA_TYPE"));
                column.setSqlDateTimeSub(rs.getInt("SQL_DATETIME_SUB"));
                column.setCharOctetLength(rs.getInt("CHAR_OCTET_LENGTH"));
                column.setOrdinalPosition(rs.getInt("ORDINAL_POSITION"));
                columns.add(column);
            }
        } catch (Exception e) {
            Log.getLogger().error("Could not capture table schema");
        } finally {
            closeResultSet(rs);
        }
        return columns;
    }

    I can't say I've ever seen a case where a column held more data than it was declared as, and would have to file that under "see it to believe it" category.
    Even if that somehow were the case, I'd expect the ODBC driver to return what the column is defined as anyway, not the data. The data changes with every row, but I'd expect the table metadata to be consistent, and refer to the table definition.
    For grins though, can you provide system.out.println output of the metadata, and also a "describe xxxx" from sqlplus if you can, where xxx is the table name?
    ie,
    while (rs.next()) {
        System.out.println("column name is " + rs.getString("COLUMN_NAME"));
        System.out.println("data type is " + rs.getInt("DATA_TYPE"));
        // ...etc...
    }
    Greg

  • How to increase the column size of an ALV table

    Hi All,
    I created an ALV table to display the content of my database table. In one of the columns the entire data from my database is not displayed; the last few characters are missing. The data type of that column in the database is CHAR. Can anyone help me with how to increase the column size in my ALV table, or suggest anything else to resolve this issue?

    Hi Vadiv,
    Try with this..
    DATA: LR_IF_CONTROLLER TYPE REF TO IWCI_SALV_WD_TABLE,
    LR_CMP_USAGE TYPE REF TO IF_WD_COMPONENT_USAGE,
    LR_CMDL TYPE REF TO CL_SALV_WD_CONFIG_TABLE,
    LR_TABLE_SETTING TYPE REF TO IF_SALV_WD_TABLE_SETTINGS.
    LR_CMP_USAGE = WD_THIS->WD_CPUSE_ALV( ).
    IF LR_CMP_USAGE->HAS_ACTIVE_COMPONENT( ) IS INITIAL.
    LR_CMP_USAGE->CREATE_COMPONENT( ).
    ENDIF.
    " get reference to the ALV model
    LR_IF_CONTROLLER = WD_THIS->WD_CPIFC_ALV( ).
    LR_CMDL = LR_IF_CONTROLLER->GET_MODEL( ).
    LR_TABLE_SETTING ?= LR_CMDL.
    " Set column width
    DATA LR_COL TYPE REF TO CL_SALV_WD_COLUMN.
    LR_COL = LR_CMDL->IF_SALV_WD_COLUMN_SETTINGS~GET_COLUMN( 'PERNR' ).
    LR_COL->SET_WIDTH( '70' ) .
    LR_COL = LR_CMDL->IF_SALV_WD_COLUMN_SETTINGS~GET_COLUMN( 'ENAME' ).
    LR_COL->SET_WIDTH( '100' ) .
    LR_TABLE_SETTING->SET_FIXED_TABLE_LAYOUT( ABAP_TRUE ).
    You can refer to webdynpro component SALV_WD_TEST_TABLE_PROPS. Go to the view TABLE and look inside the method SET_COLUMN_SETTINGS. I hope this will help you.
    Cheers,
    Kris.

  • How to Increase the Rows and Columns Size of Bex Query in Enterprise Portal  of SAP  7.3

    Dear All,
    Please let me know how to increase the number of rows and columns displayed for a BEx query in SAP Enterprise Portal 7.3.
    Currently I am getting only 4 columns and 10 rows on one page, and I am getting 1, 2, etc. paging tabs for both rows and columns. So I want to increase the column count to more than 100 and the row count to more than 10,000.
    Please suggest a suitable solution to overcome this issue.
    Please find the screenshot below.
    Thanks
    Regards,
    Sai

    Dear All,
    Please find the attached screen shot.
    The report opens with 4 or 5 columns and 5 or 6 rows.
    So, please let me know how to increase the size of the table.
    Please do the needful so I can overcome this issue.
    Thanks
    Regards,
    Sai.

  • Reg : Column Size -

    Hi All,
    Hi All,
    I've got a doubt regarding altering the column size.
    There's a table column which is currently 255 and I want to increase it to the maximum, i.e. 4000 char.
    It is used in many places by various objects - Procs/Packages/Triggers/Views...
    Is it advisable to use VARCHAR2(4000 CHAR), or should I use LONG or CLOB?
    If I use LONG/CLOB now, will it affect the existing objects, since they believe the column to be VARCHAR2?
    Help highly appreciated.
    My database:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit
    PL/SQL Release 10.2.0.3.0 - Production
    CORE     10.2.0.3.0     Production
    TNS for Solaris: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    Thanks in advance,
    Ranit B.

    ranit B wrote:
    Hi All,
    I've got a doubt regarding altering the column size.
    There's a table column which is currently 255 and I want to increase it to the maximum, i.e. 4000 char.
    Be aware, the maximum is 4000 bytes, not char. If your database is using a multi-byte character set, the maximum could actually be as small as 1000 characters (if you have 4 bytes per character).
    Is it advisable to use VARCHAR2(4000 CHAR), or should I use LONG or CLOB?
    Never use LONG. The LONG datatype was deprecated over a decade ago. If you need to store large amounts of text, use CLOB.
    If I use LONG/CLOB now, will it affect the existing objects, since they believe the column to be VARCHAR2?
    It's one of those "it depends" answers, because it depends what the code is doing with it. It's not so much a case of the code/objects believing it to be VARCHAR2, but whether that code or those objects can do implicit conversions without issue. If it's just a case of something inserting into it, believing it's a VARCHAR2, then it would work OK, e.g.
    SQL> create table mytable (x clob);
    Table created.
    SQL> insert into mytable (x) values ('This is my varchar string');
    1 row created.
    The problem would obviously be if there is code that reads the data into a VARCHAR2 variable or datatype and the data exceeds the 4000 byte limit, in which case you will get issues.
    The best way to look at it is... yes, it will affect existing code/objects, and you need to carry out an impact analysis to see where and how the column is used.
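    If simply widening the column is enough, that is a one-statement change; moving to CLOB normally cannot be done with a direct MODIFY of a populated VARCHAR2 column, so the usual route is add/copy/drop/rename. A minimal sketch with made-up table and column names; test on a copy first, and expect dependent PL/SQL to be invalidated and recompiled on next use:
    -- Option A: stay with VARCHAR2, just widen it (still capped at 4000 bytes).
    alter table my_table modify (my_col varchar2(4000 char));
    -- Option B: move to CLOB via a new column, then swap the names.
    alter table my_table add (my_col_clob clob);
    update my_table set my_col_clob = my_col;
    commit;
    alter table my_table drop column my_col;
    alter table my_table rename column my_col_clob to my_col;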

  • How to get table and column names thats being used in SQL , that's generating all my reports on SSRS.

    Good day,
    I searched through the forum and can't find anything.
    I have around 300 published reports on SSRS and we are busy migrating to a new system.
    They have already setup their tables on the new system and I need to provide them with a list of table names and column names that are being used currently to generate the 300 reports on SSRS.
    We use various tables and databases to generate these reports, and will take me forever to go through each query to get this info.
    Is it at all possible to write a query in SQL 2008 that will give me all the table names and columns being used?
    Your assistance is greatly appreciated.
    I thank you.
    Andre.

    There's no straightforward method for that, I guess. There are a couple of things you can use to get these details:
    1. Query the ReportServer.dbo.Catalog table for the details (a sketch of such a query is shown after this post).
    You may use the script below for that:
    http://gallery.technet.microsoft.com/scriptcenter/42440a6b-c5b1-4acc-9632-d608d1c40a5c
    2. Another method is to run the reports and run a SQL Profiler trace in the background to retrieve the queries used.
    But in some of these cases the report might be using a procedure, and then you will get only the procedure name. It is then up to you to get the other details from the procedure, like the tables and columns used.
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
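    For option 1 above, a sketch of the kind of query commonly run against the report server catalog (standard ReportServer schema assumed; Type = 2 marks reports). It returns each report's RDL as XML, which can then be searched or shredded for the DataSet <CommandText> elements that contain the table and column names:
    SELECT c.Name AS ReportName,
           CONVERT(XML, CONVERT(VARBINARY(MAX), c.Content)) AS ReportDefinition
    FROM   ReportServer.dbo.Catalog AS c
    WHERE  c.Type = 2;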

  • Report Generation excel column size

    Dear all,
    In my project I save data to Excel with Report Generation. I would like to change just the column size of my Excel report. There is an explanation in the LabVIEW help, but I always get an error when I run the VI.
    I work with LabVIEW 8.2.
    A simple example VI is attached.
    Regards,
    Julien
    Attachments:
    report table graph.vi (21 KB)

    Julien,
    I realize you only want to modify columns, but you need to put in a start and stop row value other than -1.  Try setting the start and stop rows to 0.

  • Create materialized view with specific column sizes

    Hi all,
    I'm trying to create a materialized view with a specific column size. Something like:
    create materialized view test_mv
    refresh force on demand
    as
    select id,
           cast(my_compound_field as nvarchar2(50))
    from ( select id,
                  field1 || field2 my_compound_field
           from   my_table);
    But Oracle seems to ignore the cast and takes the maximum size it finds for field1 || field2 in the select query. The resulting table has a column nvarchar2(44) instead of nvarchar2(50).
    This can give a problem when the view is refreshed... there could be new data that exceeds the current size, i.e. where length(field1 || field2) > 44.
    How can I override the column size of a field in a materialized view?
    Edit: Some additional info to clarify my case:
    field1 and field2 are defined as nvarchar2(25). field1 || field2 can theoretically have a length of 50, but there is currently no data in my table that results in that length, the max is 44. I am afraid that there will be data in the future that exceeds 44, resulting in an error when the MV is refreshed!
    Edited by: Pleiadian on Jan 25, 2011 2:06 PM

    Cannot reproduce what you are saying is happening.
    SQL> create table t (a nvarchar2(50), b nvarchar2(50));
    Table created.
    SQL> create materialized view tmv as
      2  select a, b, a || b c from t;
    Materialized view created.
    SQL> desc tmv
    Name                                      Null?    Type
    A                                                  NVARCHAR2(50)
    B                                                  NVARCHAR2(50)
    C                                                  NVARCHAR2(100)
    SQL> drop materialized view tmv;
    Materialized view dropped.
    SQL> create materialized view tmv as
      2  select a, b, substr(a || b, 1, 10) c from t;
    Materialized view created.
    SQL> desc tmv
    Name                                      Null?    Type
    A                                                  NVARCHAR2(50)
    B                                                  NVARCHAR2(50)
    C                                                  NVARCHAR2(10)
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    SQL>
    Edited by: 3360 on Jan 25, 2011 8:10 AM
    And with data
    SQL> insert into t values ('3123423423143hhshgvcdcvw', 'ydgeew  gdfwe   dfefde  wfjjjjjjj');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> desc tmv
    Name                                      Null?    Type
    A                                                  NVARCHAR2(50)
    B                                                  NVARCHAR2(50)
    C                                                  NVARCHAR2(10)
    SQL> select * from tmv;
    A
    B                                                  C
    3123423423143hhshgvcdcvw
    ydgeew  gdfwe   dfefde  wfjjjjjjj                      3123423423
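    Applying the same idea to the original statement (table and column names as in the question), wrapping the concatenation in SUBSTR before the CAST should pin the materialized view column at the declared width rather than at the longest value currently present; a sketch, not verified on the poster's release:
    create materialized view test_mv
    refresh force on demand
    as
    select id,
           cast(substr(field1 || field2, 1, 50) as nvarchar2(50)) as my_compound_field
    from   my_table;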

  • In Alv table, a column is editable mode, but want few cells in read only

    Hi All,
    I have an ALV table with columns A and B.
    Both are in editable mode. I want to make a few cells in column B read-only.
    How can I do that? Please help me.
    Thanks
    Vimalraj

    hi,
    refer this program,
    *& Report  ZALV_COLOR_DISPLAY_EDIT
    REPORT  zalv_color_display_edit.
    TYPE-POOLS: slis.
    TABLES : zcust_master2.
    * INTERNAL TABLE DECLARATION
    TYPES : BEGIN OF wi_zcust_master2,
            zcustid LIKE zcust_master2-zcustid,
            zcustname LIKE zcust_master2-zcustname,
            zaddr LIKE zcust_master2-zaddr,
            zcity LIKE zcust_master2-zcity,
            zstate LIKE zcust_master2-zstate,
            zcountry LIKE zcust_master2-zcountry,
            zphone LIKE zcust_master2-zphone,
            zemail LIKE zcust_master2-zemail,
            zfax LIKE zcust_master2-zfax,
            zstat LIKE zcust_master2-zstat,
            field_style  TYPE lvc_t_styl,
    END OF wi_zcust_master2.
    DATA: it_wi_zcust_master2 TYPE STANDARD TABLE OF wi_zcust_master2
                                                     INITIAL SIZE 0,
          wa_zcust_master2 TYPE wi_zcust_master2.
    *ALV data declarations
    DATA: fieldcatalog TYPE slis_t_fieldcat_alv WITH HEADER LINE.
    DATA: it_fieldcat TYPE lvc_t_fcat,     "slis_t_fieldcat_alv WITH HEADER
    line,
          wa_fieldcat TYPE lvc_s_fcat,
          gd_tab_group TYPE slis_t_sp_group_alv,
          gd_layout    TYPE lvc_s_layo,     "slis_layout_alv,
          gd_repid     LIKE sy-repid.
    START-OF-SELECTION.
      PERFORM data_retrieval.
      PERFORM set_specific_field_attributes.
      PERFORM build_fieldcatalog.
      PERFORM build_layout.
      PERFORM display_alv_report.
    *&      Form  BUILD_FIELDCATALOG
    *       Build Fieldcatalog for ALV Report
    FORM build_fieldcatalog.
      wa_fieldcat-fieldname   = 'ZCUSTID'.
      wa_fieldcat-scrtext_m   = 'CUSTOMER ID'.
      wa_fieldcat-col_pos     = 0.
      wa_fieldcat-outputlen   = 10.
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
      wa_fieldcat-fieldname   = 'ZCUSTNAME'.
      wa_fieldcat-scrtext_m   = 'CUSTOMER NAME'.
      wa_fieldcat-col_pos     = 1.
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
      wa_fieldcat-fieldname   = 'ZADDR'.
      wa_fieldcat-scrtext_m   = 'ADDRESS'.
      wa_fieldcat-col_pos     = 2.
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
      wa_fieldcat-fieldname   = 'ZCITY'.
      wa_fieldcat-scrtext_m   = 'CITY'.
      wa_fieldcat-col_pos     = 3.
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
      wa_fieldcat-fieldname   = 'ZSTATE'.
      wa_fieldcat-scrtext_m   = 'STATE'.
      wa_fieldcat-col_pos     = 4.
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
      wa_fieldcat-fieldname   = 'ZCOUNTRY'.
      wa_fieldcat-scrtext_m   = 'COUNTRY'.
      wa_fieldcat-col_pos     = 5.
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
      wa_fieldcat-fieldname   = 'ZPHONE'.
      wa_fieldcat-scrtext_m   = 'PHONE NUMBER'.
      wa_fieldcat-col_pos     = 6.
    wa_fieldcat-edit        = 'X'. "sets whole column to be editable
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
      wa_fieldcat-fieldname   = 'ZEMAIL'.
      wa_fieldcat-scrtext_m   = 'EMAIL'.
      wa_fieldcat-edit        = 'X'. "sets whole column to be editable
      wa_fieldcat-col_pos     = 7.
      wa_fieldcat-outputlen   = 15.
      wa_fieldcat-datatype     = 'CURR'.
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
      wa_fieldcat-fieldname   = 'ZFAX'.
      wa_fieldcat-scrtext_m   = 'FAX'.
      wa_fieldcat-col_pos     = 8.
      wa_fieldcat-edit        = 'X'. "sets whole column to be editable
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
      wa_fieldcat-fieldname   = 'ZSTAT'.
      wa_fieldcat-scrtext_m   = 'STATUS'.
      wa_fieldcat-col_pos     = 9.
      APPEND wa_fieldcat TO it_fieldcat.
      CLEAR  wa_fieldcat.
    ENDFORM.                    " BUILD_FIELDCATALOG
    *&      Form  BUILD_LAYOUT
    *       Build layout for ALV grid report
    FORM build_layout.
    * Set layout field for field attributes (i.e. input/output)
      gd_layout-stylefname = 'FIELD_STYLE'.
      gd_layout-zebra             = 'X'.
    ENDFORM.                    " BUILD_LAYOUT
    *&      Form  DISPLAY_ALV_REPORT
    *       Display report using ALV grid
    FORM display_alv_report.
      gd_repid = sy-repid.
    * call function 'REUSE_ALV_GRID_DISPLAY'
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY_LVC'
        EXPORTING
          i_callback_program = gd_repid
          is_layout_lvc      = gd_layout
          it_fieldcat_lvc    = it_fieldcat
          i_save             = 'X'
        TABLES
          t_outtab           = it_wi_zcust_master2
        EXCEPTIONS
          program_error      = 1
          OTHERS             = 2.
      IF sy-subrc <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    ENDFORM.                    " DISPLAY_ALV_REPORT
    *&      Form  DATA_RETRIEVAL
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM data_retrieval .
      DATA: ld_color(1) TYPE c.
      SELECT zcustid zcustname zaddr zcity zstate zcountry zphone zemail
    zfax zstat UP TO 10 ROWS FROM zcust_master2 INTO CORRESPONDING FIELDS OF
    TABLE it_wi_zcust_master2.
    ENDFORM.                    "data_retrieval
    *&      Form  set_specific_field_attributes
    *       populate FIELD_STYLE table with specific field attributes
    FORM set_specific_field_attributes .
      DATA ls_stylerow TYPE lvc_s_styl .
      DATA lt_styletab TYPE lvc_t_styl .
    * Populate style variable (FIELD_STYLE) with style properties
    * The following code sets it to be disabled (display only) if 'ZFAX'
    * is NOT INITIAL.
      LOOP AT it_wi_zcust_master2 INTO  wa_zcust_master2.
        IF  wa_zcust_master2-zfax IS NOT INITIAL.
          ls_stylerow-fieldname = 'ZFAX' .
          ls_stylerow-style = cl_gui_alv_grid=>mc_style_disabled.
                                          "set field to disabled
          APPEND ls_stylerow  TO  wa_zcust_master2-field_style.
          MODIFY it_wi_zcust_master2  FROM  wa_zcust_master2.
        ENDIF.
      ENDLOOP.
    ENDFORM.                    "set_specific_field_attributes
    Regards,
    K.Tharani.

  • Gather_table_stats with a method opt of "for all indexed columns size 0"

    I have 9 databases I support that contain the same structure and very similar data concentrations. We are seeing inconsistent performance in the different databases due to bind variable peeking. I have tracked it down to the min and max values that are gathered during the analyze. I analyze on one cluster and export/import those statistics into the other clusters, then lock down the gathered stats. Some of the statistics are on tables that contain transient data (the older data is purged, and new data gets a new PK sequence number).
    Since I am gathering statistics with 'FOR ALL INDEXED COLUMNS SIZE 1', a min and max value are grabbed. This value is only appropriate for a short period of time, and only for a specific database. I do want Oracle to know the density to help its calculations, but I don't want cardinality based on whether the current bind values fall in this range.
    Example:
    COLUMN PK
    When I analyze, the min is 1 and the max is 5. I then let the database run, and the new min is 100 and max is 105: same number of rows, but different min/max. At first a select * from table where pk >= 1 and pk <= 5 would return a cardinality of 5. Later, a select * from table where pk >= 100 and pk <= 105 would return a cardinality of 1.
    Any ideas how to avoid this, other than trying to set min and max to something myself (like min = 1, max = 99999999)?
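    For the "set min and max to something myself" route, one documented way is DBMS_STATS: read the column stats back, widen the low/high values, and write them again before locking. A minimal sketch, assuming a placeholder table MY_TABLE with indexed column PK; check the behaviour on your own version before relying on it:
    declare
      l_distcnt number;
      l_density number;
      l_nullcnt number;
      l_avgclen number;
      l_srec    dbms_stats.statrec;
    begin
      -- read the current stats so only the low/high values change
      dbms_stats.get_column_stats(
        ownname => user, tabname => 'MY_TABLE', colname => 'PK',
        distcnt => l_distcnt, density => l_density, nullcnt => l_nullcnt,
        srec    => l_srec,    avgclen => l_avgclen);
      -- replace the low/high values with an artificially wide range
      l_srec.epc    := 2;
      l_srec.bkvals := null;
      dbms_stats.prepare_column_values(l_srec, dbms_stats.numarray(1, 999999999));
      dbms_stats.set_column_stats(
        ownname => user, tabname => 'MY_TABLE', colname => 'PK',
        distcnt => l_distcnt, density => l_density, nullcnt => l_nullcnt,
        srec    => l_srec,    avgclen => l_avgclen);
    end;
    /
    With the stored range wide enough, a peeked bind value should not fall outside the known min/max, so the range-predicate cardinality estimate stays consistent across the databases.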

    MarkDPowell wrote:
    The Oracle documentation on bind variable peeking said it did not peek without histograms and I cannot remember ever seeing on 9.2 where the trace showed otherwise.
    Mark,
    see this simple test case run on 9.2.0.8. No histograms, but bind variable peeking, as you can see that the EXPLAIN PLAN output generated by AUTOTRACE differs from the estimated cardinality of the actual plan used at runtime.
    Which documentation do you refer to?
    SQL>
    SQL> alter session set nls_language = 'AMERICAN';
    Session altered.
    SQL>
    SQL> drop table bind_peek_test;
    Table dropped.
    SQL>
    SQL> create table bind_peek_test
      2  as
      3  select
      4             100 as n1
      5           , cast(dbms_random.string('a', 20) as varchar2(20)) as filler
      6  from
      7             dual
      8  connect by
      9             level <= 1000;
    Table created.
    SQL>
    SQL> exec dbms_stats.gather_table_stats(null, 'bind_peek_test', method_opt=>'FOR ALL COLUMNS SIZE 1')
    PL/SQL procedure successfully completed.
    SQL>
    SQL> variable n number
    SQL>
    SQL> variable n2 number
    SQL>
    SQL> alter system flush shared_pool;
    System altered.
    SQL>
    SQL> exec :n := 1; :n2 := 50;
    PL/SQL procedure successfully completed.
    SQL>
    SQL> set autotrace traceonly
    SQL>
    SQL> select * from bind_peek_test where n1 >= :n and n1 <= :n2;
    no rows selected
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1000 Bytes=24
              000)
       1    0   FILTER
       2    1     TABLE ACCESS (FULL) OF 'BIND_PEEK_TEST' (Cost=2 Card=100
              0 Bytes=24000)
    Statistics
            236  recursive calls
              0  db block gets
             35  consistent gets
              0  physical reads
              0  redo size
            299  bytes sent via SQL*Net to client
            372  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
              0  rows processed
    SQL>
    SQL> set autotrace off
    SQL>
    SQL> select
      2             cardinality
      3  from
      4             v$sql_plan
      5  where
      6             cardinality is not null
      7  and      hash_value in (
      8    select
      9            hash_value
    10    from
    11            v$sql
    12    where
    13            sql_text like 'select * from bind_peek_test%'
    14    );
    CARDINALITY
              1
    SQL>
    SQL> alter system flush shared_pool;
    System altered.
    SQL>
    SQL> exec :n := 100; :n2 := 100;
    PL/SQL procedure successfully completed.
    SQL>
    SQL> set autotrace traceonly
    SQL>
    SQL> select * from bind_peek_test where n1 >= :n and n1 <= :n2;
    1000 rows selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1000 Bytes=24
              000)
       1    0   FILTER
       2    1     TABLE ACCESS (FULL) OF 'BIND_PEEK_TEST' (Cost=2 Card=100
              0 Bytes=24000)
    Statistics
            236  recursive calls
              0  db block gets
            102  consistent gets
              0  physical reads
              0  redo size
          34435  bytes sent via SQL*Net to client
           1109  bytes received via SQL*Net from client
             68  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
           1000  rows processed
    SQL>
    SQL> set autotrace off
    SQL>
    SQL> select
      2             cardinality
      3  from
      4             v$sql_plan
      5  where
      6             cardinality is not null
      7  and      hash_value = (
      8    select
      9            hash_value
    10    from
    11            v$sql
    12    where
    13            sql_text like 'select * from bind_peek_test%'
    14    );
    CARDINALITY
           1000
    SQL>
    SQL> spool off
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Table css column-header text color not working

    When setting
    -fx-text-fill: red;
    in the table CSS, it does not change the header text to red. Is there another value? The selector I am using is .table-view .column-header.

    You have to override the following from the caspian.css:
    .table-view .column-header  {
        -fx-text-fill: -fx-selection-bar-text;
        /* TODO: for some reason, this doesn't scale. */
        -fx-font-size: 1.083333em; /* 13pt - 1 more than the default font */
        -fx-size: 25;
        -fx-border-style: solid;
        -fx-border-color:
            /* Inner border: we have different colours along the top, right, bottom and left.
               Refer to RT-12298 for the spec. */
            derive(-fx-base, 80%)
            linear-gradient(to bottom, derive(-fx-base,80%) 20%, derive(-fx-base,-10%) 90%)
            derive(-fx-base, 10%)
            linear-gradient(to bottom, derive(-fx-base,80%) 20%, derive(-fx-base,-10%) 90%),
            /* Outer border: */
            transparent -fx-table-header-border-color -fx-table-header-border-color transparent;
        -fx-border-insets: 0 1 1 0, 0 0 0 0;
        -fx-border-width: 0.083333em, 0.083333em;
    }
    .table-view .column-header-background {
        -fx-background-color: -fx-body-color;
        -fx-padding: 0;
    }
