DBA_SCHEDULER_JOBS table

I've noticed that the gather_stats_job (job class: AUTO_TASKS_JOB_CLASS) record in dba_scheduler_jobs does not have an entry for next_run_date. Should it?

Hi,
Since the schedule type is a window group, there will not be a next run date for the job. The job is scheduled to run when any window in the window group specified in schedule_name opens.
You can query dba_scheduler_window_groups to see the next time a window in the window group will open.
If you want more info, you can query dba_scheduler_wingroup_members to see which windows are in the window group, and then dba_scheduler_windows to see when those windows are scheduled.
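For example, a quick check might look like this (NEXT_START_DATE is the standard column in these dictionary views; the group name MAINTENANCE_WINDOW_GROUP is an assumption and may differ on your system):

-- Next time any window in each group opens
SELECT window_group_name, next_start_date
FROM dba_scheduler_window_groups;

-- Which windows belong to the group, then their schedules
SELECT window_name
FROM dba_scheduler_wingroup_members
WHERE window_group_name = 'MAINTENANCE_WINDOW_GROUP';

SELECT window_name, repeat_interval, duration, next_start_date
FROM dba_scheduler_windows;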
Hope this helps,
Ravi.

Similar Messages

• How to schedule a job which needs to run every day at 1 AM?

    begin
      DBMS_SCHEDULER.create_job (
        job_name        => 'BJAZPROPMAINTAIN',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin schemaname.schedule_procedure; end;',
        start_date      => to_timestamp('02-aug-08 01:00:00 PM', 'dd-mon-rr hh:mi:ss PM'),
        repeat_interval => 'FREQ=DAILY; BYHOUR=01',  -- repeats daily at 01:00
        enabled         => TRUE,
        auto_drop       => FALSE);
    end;
    /
    Hi all,
    I want to schedule a job that needs to run every day at one o'clock in the early morning. I haven't set up the job scheduler before this; by searching the net and previous scheduler code I wrote the code above for running every day at 1 AM. I am a little confused about the time:
    repeat_interval => 'FREQ=DAILY;BYHOUR=01' - is this correct or wrong?
    Also, some other jobs are scheduled at the same time. Will executing at the same time create any problem, or do we need to change the timing to 1:15 or so?
    Please advise me.

    Thanks a lot. So it will be executing every night at 1 o'clock, am I right?
    It should. But I'd say schedule it, and only then can we be sure about it. As for the timing part, it's syntactically correct.
    I saw the job_priority column in the dba_scheduler_jobs table but don't know what it does.
    And also, how can I fetch this scheduled job's sid and serial#? I checked v$session but don't know how to correlate this.
    Please explain.
    In dba_scheduler_jobs there is a column, client_id. You can map it to the sid from V$SESSION. I don't have a box running Oracle at the moment, so I won't be able to test it for you. Do it and post feedback.
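    As a hedged alternative sketch (an editor's addition, not from the original reply): for a job that is currently running, DBA_SCHEDULER_RUNNING_JOBS exposes a SESSION_ID column that joins directly to V$SESSION:
    -- sid/serial# of currently running scheduler jobs
    SELECT r.job_name, s.sid, s.serial#
    FROM dba_scheduler_running_jobs r
    JOIN v$session s ON s.sid = r.session_id;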
    What will happen if more than one job is scheduled at the same time?
    I think that is exactly why we set the priority on the two: whichever needs to execute first gets the higher priority.
    Let me know about this.
    Jobs are prioritized in two parts: within the class they are a part of, and individually. If you have two jobs in the same class, they can be made to run with different priorities via the priority attribute set within them. This is a number starting from 1, meaning highest priority. So if there are two jobs scheduled for the same time, you need to check which job class they fall into. If they are in the same class, then you have to change their priorities.
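    As a hedged sketch of setting that attribute (job_priority ranges from 1 to 5, with 3 the default; the job name is reused from the example above):
    BEGIN
      DBMS_SCHEDULER.SET_ATTRIBUTE(
        name      => 'BJAZPROPMAINTAIN',
        attribute => 'job_priority',
        value     => 1);  -- 1 = highest priority within the job class
    END;
    /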
    I suggest you read the books; they cover all these topics in much more detail.
    Also, there is a dedicated forum about the Scheduler. In future, for Scheduler-related questions, you can visit there.
    Scheduler
    Aman....

  • Incorrect start time

    Hi,
    I have a scheduler which runs a procedure every day at 9am. The code looks like this:
    BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
    JOB_NAME => 'my_job_scheduler',
    JOB_TYPE => 'STORED_PROCEDURE',
    JOB_ACTION => 'my_procedure',
    START_DATE => SYSTIMESTAMP,
    REPEAT_INTERVAL => 'FREQ=DAILY; BYHOUR=9',
    ENABLED => TRUE,
    AUTO_DROP => TRUE);
    END;
    The job runs successfully and each time it takes about 3 seconds.
    However, when I look in the dba_scheduler_jobs view, the last run date and the run start time are at 9:11 every day and not 9:00.
    Any idea why that is?
    Thanks
    Venus

    Also note that the default for start_date is dbms_scheduler.stime.
    dbms_scheduler.stime reports the system time (systimestamp) in
    the default timezone the scheduler is using.
    If this is the correct timezone you wish to use for the job, you can
    leave start_date null.
    You can also add to the current date to run something in the future, like
    dbms_scheduler.stime + interval '1' hour
    SQL> select dbms_scheduler.stime from dual;
    STIME
    06-SEP-07 03.19.40.488919000 PM PST8PDT
    SQL> select dbms_scheduler.stime + interval '1' hour from dual;
    DBMS_SCHEDULER.STIME+INTERVAL'1'HOUR
    06-SEP-07 04.20.11.188552000 PM PST8PDT
    SQL>
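    As a side note on the 9:11 itself: when BYMINUTE and BYSECOND are omitted from the calendaring expression, they are inherited from start_date, so a job created at 9:11 keeps running at 9:11. A hedged sketch pinning the run to 9:00 exactly (job name taken from the post above):
    BEGIN
      DBMS_SCHEDULER.SET_ATTRIBUTE(
        name      => 'my_job_scheduler',
        attribute => 'repeat_interval',
        value     => 'FREQ=DAILY; BYHOUR=9; BYMINUTE=0; BYSECOND=0');
    END;
    /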

  • Dba_scheduler_jobs for clearing the sys audit table

    Hi.
    Can you please tell me how to purge sys.aud$ table records once the table reaches more than one lakh (100,000) rows? The 1 lakh is the threshold, and data beyond it should be purged periodically by an automated scheduler job.
    My db is 10g.
    So I need to always retain 1 lakh rows in the table; extra data should be purged once the threshold is breached. I don't know how to write this since I am new to Oracle. Can anyone help?
    Thanks

    See these MOS Docs
    How To Shrink AUD$ IN ORACLE 10G (Doc ID 1080112.1)
    New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information (Doc ID 731908.1)
    How to Truncate, Delete, or Purge Rows from the Audit Trail Table SYS.AUD$ (Doc ID 73408.1)
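    As a rough, hedged sketch of the retention idea only (a plain DELETE; AUD$ is a SYS-owned table, so test carefully and prefer the supported methods in the notes above; NTIMESTAMP# is the audit timestamp column in 10g):
    -- Delete everything older than the 100,000th-newest audit record
    DELETE FROM sys.aud$
    WHERE ntimestamp# < (SELECT MIN(ntimestamp#)
                         FROM (SELECT ntimestamp#
                               FROM sys.aud$
                               ORDER BY ntimestamp# DESC)
                         WHERE ROWNUM <= 100000);
    COMMIT;
    This could then be wrapped in a DBMS_SCHEDULER job for the periodic purge.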
    Srini

  • OEM tables for scheduled jobs

    Hi,
    What are the DB tables in which OEM saves the scheduled jobs that one can add inside the OEM web interface?
    I have more than 100 jobs scheduled inside OEM and I want to get a report with the job name, what each job is doing, etc.: the kind of info that I can find in the DBA_SCHEDULER% views inside a DB.
    I tried to get the info from DBA_SCHEDULER_JOBS, but the jobs in this view are for the DB only, not the jobs that OEM is running against the databases registered with OEM.
    Thanks,
    Daniel

    See if you find the information you want in the MGMT$JOB* views.
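    A hedged example against the repository (MGMT$JOB_EXECUTION_HISTORY is one of the documented EM repository views; the column list here is an assumption and may vary by EM version):
    SELECT job_name, job_owner, job_type, status, start_time, end_time
    FROM sysman.mgmt$job_execution_history
    ORDER BY start_time DESC;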
    Eric

  • Partitioned Incremental Table - no stats gathered on new partitions

    Dear Gurus
    Hoping that someone can point me in the right direction to troubleshoot. Version: Enterprise Edition 11.1.0.7 on AIX.
    Range partitioned table with hash sub-partitions.
    Automatic stats gather is on.
    dba_tables shows global stats YES, last analyzed 06/09/2011 (when first analyzed on migration of the data), and dba_tab_partitions shows most partitions analyzed at that date, with most of the others up until 10/10/2011, done automatically by the weekend gather-stats scheduled job.
    46 new partitions have been added in the last few months, but no stats have been gathered on them according to dba_tab_partitions, and dba_tables last_analyzed still says 06/09/2011, the date the table was first analyzed by gathering stats manually rather than via the auto stats gatherer.
    Checked dbms_stats.get_prefs: set to incremental, and all the default values recommended by Oracle are in place, including publish = TRUE.
    dba_tab_partitions has no values in num_rows, last_analyzed etc.
    dba_tab_modifications has no values next to the new partitions but shows inserts as being 8 million approx per partition - no deletes or updates.
    dba_tab_statistics has no values next to the new partitions. All other partitions are marked as NO in the stale column.
    Checked the dbms_stats job history: it showed that the stats gathering stopped when the automatically allotted window closed.
    Looked at Grid Control: the stats gather for the table started at 6am Saturday morning and stopped at 2am Monday morning.
    Checked the recommended window: it stopped analyzing that table at 2am exactly, having tried to analyze it since 6am Saturday morning.
    Had expected that as the table was in incremental mode - it wouldn't have timed out and the new partitions would have been analyzed within the window.
    The job_queue_processes on the database = 1.
    Increased the job_queue_processes on the database = 2.
    I had been told that the original stats had taken 3 days in total to gather, so via Grid Control (10.2.0.4) I scheduled a dbms_scheduler job to gather stats on that table over a bank holiday weekend, and asked management to start it 24 hours early to allow extra time.
    The Oracle defaults were accepted (as recommended in various seminars and white papers), except CASCADE: although I wanted the indexes analyzed, I decided that was icing on the cake I couldn't afford.
    Went to work 24 hours later and checked dba_scheduler_jobs: the job was running. Checked the stats in dba_tab_statistics and dba_tab_partitions: nothing had changed. I had expected to see stats gathered first for the partitions missing them, but a quick check of Grid showed it was doing a SELECT via full table scan, and still on the first datafile!! Some have suggested watching out for the DELETE taking a long time, but I only saw evidence of the SELECT, so I ran an AWR report, and sure enough there was a full table scan on the whole table. Although the weekend gather-stats job was also in operation, it wasn't doing my table, though it was definitely running against others.
    So I checked last_analyzed on other tables - one of them partitioned - and they were getting up-to-date stats. But those tables and partitions are ridiculously small in comparison to the table I was focused on.
    Next day I came in and checked the dba_scheduler_job_log: my job had completed within 24 hours and completed successfully.
    Horror of horrors - none of the stats had changed one bit in any view I looked at.
    I got my excel spreadsheet out and worked out whether, because less than 10% had changed and I'd accepted the defaults, that was why there was nothing in dba_tables to reflect that it had been analyzed when I asked.
    My rough calculations showed the changes were around the 20% mark, so gather_table_stats should have picked that up and gathered stats for the new partitions. There was nothing in evidence in any view at all.
    I scheduled the job via Grid Control 10.2.0.4 against an Oracle database using incremental stats introduced in 11.1.0.7 - is there a problem at that level?
    I understand there are bugs with incremental stats gathering in 11.1.0.7 which are resolved in 11.2.0; however, we've only applied CPUs up until April of last year. It's possible that, being so far behind, we've missed something?
    Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
    I'd rather find a solution than save my reputation!!
    Thanks for anyone who replies - I'm not online at work so can't always give you my exact commands done - but hopefully you'll give me a few pointers of where to look next?
    Thanks!!!!!!!!!!!!!

    Save the attitude for your friends and family - it isn't appropriate on the forum.
    >
    I did exactly what it said on the tin:
    >
    Maybe 'tin' has some meaning for you, but I have never heard of it when discussing an Oracle issue or problem, and I have been doing this for over 25 years.
    >
    but obviously cannot subscribe to individual names:
    >
    Same with this. No idea what 'subscribe to individual names' means.
    >
    When I said defaults, I really did mean the defaults given by Oracle, not some defaults made up by me. I thought that by putting Oracle in my text there, people would realise what the defaults were.
    If you are suggesting that in all posts I should name the Oracle default values because the gurus on the site do not know them, then please let me know, as I have wrongly assumed that I am asking questions of gurus who know this stuff inside out.
    Clearly I have got this site wrong.
    >
    Yes - you have got this site wrong. Putting 'Oracle' in the text doesn't enable people to realize
    what the defaults in your specific environment are.
    There is not a guru that I know of,
    and that includes Tom Kyte, Jonathan Lewis and many others, who can tell
    you, sight unseen, what default values are in play in your specific environment
    given only the information you provided in your post.
    What is, or isn't, a 'default' can often be changed at either the system or session level.
    Can we make an educated guess about what the default value for a parameter might be?
    Of course - but that IS NOT how you troubleshoot.
    The first rule of troubleshooting is DO NOT MAKE ANY ASSUMPTIONS.
    The second rule is to gather all of the facts possible about the reported problem, its symptoms
    and its possible causes.
    These facts include determining EXACTLY what steps and commands the user performed.
    Next you post the prototype for gathering stats:
    DBMS_STATS.GATHER_TABLE_STATS (
       ownname          VARCHAR2,
       tabname          VARCHAR2,
       partname         VARCHAR2 DEFAULT NULL,
       estimate_percent NUMBER   DEFAULT to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
       block_sample     BOOLEAN  DEFAULT FALSE,
       method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
       degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
       granularity      VARCHAR2 DEFAULT get_param('GRANULARITY'),
       cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
       stattab          VARCHAR2 DEFAULT NULL,
       statid           VARCHAR2 DEFAULT NULL,
       statown          VARCHAR2 DEFAULT NULL,
       no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type(get_param('NO_INVALIDATE')),
    So what exactly is the value for GRANULARITY? Do you know?
    Well, it can make a big difference. If you don't know, you need to find out.
    >
    As mentioned earlier - I accepted all the "defaults".
    >
    Saying 'I used the default' only helps WHEN YOU KNOW WHAT THE DEFAULT VALUES ARE!
    Now can we get back to the issue?
    If you had read the excerpt I provided you should have noticed that the values
    used for GRANULARITY and INCREMENTAL have a significant influence on the stats gathered.
    And you should have noticed that the excerpt mentions full table scans exactly like yours.
    So even though you said this
    >
    Had expected that as the table was in incremental mode
    >
    Why did you expect this? You said you used all default values. The excerpt I provided
    says the default value for INCREMENTAL is FALSE. That doesn't jibe with your expectation.
    So did you check to see what INCREMENTAL was set to? Why not? That is part of troubleshooting.
    You form a hypothesis. You gather the facts; one of which is that you are getting a full table
    scan. One of which is you used default settings; one of which is FALSE for INCREMENTAL which,
    according to the excerpt, causes full table scans which matches what you are getting.
    Conclusion? Your expectation is wrong. So now you need to check out why. The first step
    is to query to see what value of INCREMENTAL is being used.
    You also need to check what value of GRANULARITY is being used.
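    A hedged sketch of those checks (DBMS_STATS.GET_PREFS and SET_TABLE_PREFS are available from 11.1; the schema and table names here are placeholders):
    -- What is actually in effect for this table?
    SELECT dbms_stats.get_prefs('INCREMENTAL', 'MYSCHEMA', 'MY_PART_TABLE') AS incremental,
           dbms_stats.get_prefs('GRANULARITY', 'MYSCHEMA', 'MY_PART_TABLE') AS granularity
    FROM dual;
    -- And, if appropriate, switch the table to incremental stats
    BEGIN
      DBMS_STATS.SET_TABLE_PREFS('MYSCHEMA', 'MY_PART_TABLE', 'INCREMENTAL', 'TRUE');
      DBMS_STATS.SET_TABLE_PREFS('MYSCHEMA', 'MY_PART_TABLE', 'GRANULARITY', 'AUTO');
    END;
    /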
    And you say this
    >
    Or that I really don't know how to gather stats on partitioned tables and it's all my fault - in which case - please let me know - and don't hold back!!!
    I'd rather find a solution than save my reputation!!
    >
    Yet when I provide an excerpt that seems to match your issue you cop an attitude.
    I gave you a few pointers of where to look next, and you fault us for not knowing the default values for all parameters, for all versions of Oracle, on all OSs.
    How disingenuous is that?

  • Identifying child jobs in tables

    I am trying to put together a query for the JobMst and JobRun tables that needs to identify the child steps which are the ones that are actually running our automated processes. We have jobs set up where the child jobs are from 1 to 4 levels beneath the top parent job, such as these examples:
    Parent
         Child
    up to
    Parent
         Parent
              Parent
                   Parent
                        Child
    Would I use the JobMst_ID field in the JobMst table to identify the child process? If so, how would its value differ from the parent jobs'? Would this be the same for both of these examples, or would the value change depending upon the depth of the child job?
    Is there any documentation in regard to the JobMst and JobRun tables (and other common tables as well) that is available that would cover this information? Any help with this will be appreciated.

    Hi Mark,
    Some of the jobs created via EM are created and managed through the EM-specific scheduler and are not manageable directly from the database (jobs used for backup often fall into this category).
    If the job does not show up in dba_jobs or dba_scheduler_jobs then it is probably one of those jobs.
    -Ravi

  • MB5B Report table for Open and Closing stock on date wise

    Hi Frds,
    I am trying to get the values of opening and closing stock by date from the tables MARD and MBEW (Material Valuation), but they do not match the MB5B report.
    Could anyone suggest the correct tables to fetch opening and closing stock values by date for the MB5B report?
    Thanks
    Mohan M

    Hi,
    Please check the below links...
    Query for Opening And  Closing Stock
    Inventory Opening and Closing Stock
    open stock and closing stock
    Kuber

  • Error while dropping a table

    Hi All,
    I got an error while dropping a table, which is:
    ORA-00600: internal error code, arguments: [kghstack_free1], [kntgmvm: collst], [], [], [], [], [], [], [], [], [], []
    I have now learnt that the ORA-600 error is a matter for the DBA. How do I proceed?
    thanks and regards,
    sri ram.

    ORA-00600 errors should be raised as a service request with Oracle, as they imply an internal bug.
    You can search Oracle Support first to see if anyone has had the same class of ORA-00600 error, and then, if not (and therefore no patch exists), raise your issue with Oracle.
    http://support.oracle.com

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight against the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. What I often struggle with is the Logical Levels (in the Content tab) where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates: getting them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance, I have about 10-12 different dimensions: should all of them always be connected to each fact table, either on Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes that seems like the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
    It is not necessary to connect to all dimensions; it depends on the report you are creating. But as a best practice you should maintain them all at the Detail level when you specify join conditions in the physical layer.
    For example, for the sales table: if you want to report at the ProductDimension.Productname level then you should use Detail level, else Total level (at Product, Employee level).
    Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin guide (Get Levels definition)
    thanks,
    Saichand.v

  • Rendering xml-table into logical filename in SAP R/3

    Hi,
    I am trying to write an XML table of bytes to a logical filepath in SAP R/3.
    Do I have to use the method gui_download, or shall I loop over the internal XML table?
    When I tried to loop the XML table into a structure and then transfer the structure to the logical filename, I got problems with the line breaks in my XML file. How do I get the lines to break exactly the same as I wrote them in my ABAP code?
    Edited by: Kristina Hellberg on Jan 10, 2008 4:24 PM

    I believe you posted in the wrong forum.
    This forum is dedicated to development and deployment of .Net applications that connect and interact with BusinessObjects Enterprise, BusinessObjects Edge, or Crystal Reports Server. This includes the development of applications using the BusinessObjects Enterprise, Report Application Server, Report Engine, and Web Services SDKs.
    Ludek

  • Can you check for data in one table or another but not both in one query?

    I have a situation where I need to link two tables together, but the data may be in another (archive) table, or different records are in both and I want the latest record from either table:
    ACCOUNT
    AccountID     Name   
    123               John Doe
    124               Jane Donaldson           
    125               Harold Douglas    
    MARKETER_ACCOUNT
    Key     AccountID     Marketer    StartDate     EndDate
    1001     123               10526          8/3/2008     9/27/2009
    1017     123               10987          9/28/2009     12/31/4712    (high date ~ which means currently with this marketer)
    1023     124               10541          12/03/2010     12/31/4712
    ARCHIVE
    Key     AccountID     Marketer    StartDate     EndDate
    1015     124               10526          8/3/2008     12/02/2010
    1033     125               10987         01/01/2011     01/31/2012  
    So my query needs to return the following:
    123     John Doe                        10526     8/3/2008     9/27/2009
    124     Jane Donaldson             10541     12/03/2010     12/31/4712     (this is the later of the two records for this account between archive and marketer_account tables)
    125     Harold Douglas               10987          01/01/2011     01/31/2012     (he is only in archive, so get this record)
    I'm unsure how to proceed in one query.  Note that I am reading in possibly multiple accounts at a time and returning a collection back to .net
    open CURSOR_ACCT
              select AccountID
              from
                     ACCOUNT A,
                     MARKETER_ACCOUNT M,
                     ARCHIVE R
               where A.AccountID = nvl((select max(M.EndDate) from Marketer_account M2
                                                    where M2.AccountID = A.AccountID),
                                                      (select max(R.EndDate) from Archive R2
                                                    where R2.AccountID = A.AccountID)
                   and upper(A.Name) like parameter || '%'
    <can you do a NVL like this?   probably not...   I want to be able to get the MAX record for that account off the MarketerACcount table OR the max record for that account off the Archive table, but not both>
    (parameter could be "DO", so I return all names starting with DO...)

    If I understand your description, I would assume that for John Doe we would expect the second row from marketer_account ("high date ~ which means currently with this marketer"). Here is a solution with analytic functions:
    drop table account;
    drop table marketer_account;
    drop table marketer_account_archive;
    create table account (
        id number
      , name varchar2(20)
    );
    insert into account values (123, 'John Doe');
    insert into account values (124, 'Jane Donaldson');
    insert into account values (125, 'Harold Douglas');
    create table marketer_account (
        key number
      , AccountId number
      , MktKey number
      , FromDt date
      , ToDate date
    );
    insert into marketer_account values (1001, 123, 10526, to_date('03.08.2008', 'dd.mm.yyyy'), to_date('27.09.2009', 'dd.mm.yyyy'));
    insert into marketer_account values (1017, 123, 10987, to_date('28.09.2009', 'dd.mm.yyyy'), to_date('31.12.4712', 'dd.mm.yyyy'));
    insert into marketer_account values (1023, 124, 10541, to_date('03.12.2010', 'dd.mm.yyyy'), to_date('31.12.4712', 'dd.mm.yyyy'));
    create table marketer_account_archive (
        key number
      , AccountId number
      , MktKey number
      , FromDt date
      , ToDate date
    );
    insert into marketer_account_archive values (1015, 124, 10526, to_date('03.08.2008', 'dd.mm.yyyy'), to_date('02.12.2010', 'dd.mm.yyyy'));
    insert into marketer_account_archive values (1033, 125, 10987, to_date('01.01.2011', 'dd.mm.yyyy'), to_date('31.01.2012', 'dd.mm.yyyy'));
    First, an exploratory look at the combined data with an analytic max per account:
    select key, AccountId, MktKey, FromDt, ToDate
         , max(FromDt) over(partition by AccountId) max_FromDt
      from marketer_account
    union all
    select key, AccountId, MktKey, FromDt, ToDate
         , max(FromDt) over(partition by AccountId) max_FromDt
      from marketer_account_archive;
    with
    basedata as (
    select key, AccountId, MktKey, FromDt, ToDate
      from marketer_account
    union all
    select key, AccountId, MktKey, FromDt, ToDate
      from marketer_account_archive
    ),
    basedata_with_max_intervals as (
    select key, AccountId, MktKey, FromDt, ToDate
         , row_number() over(partition by AccountId order by FromDt desc) FromDt_Rank
      from basedata
    ),
    filtered_basedata as (
    select key, AccountId, MktKey, FromDt, ToDate from basedata_with_max_intervals where FromDt_Rank = 1
    )
    select a.id
         , a.name
         , b.MktKey
         , b.FromDt
         , b.ToDate
      from account a
      join filtered_basedata b
        on (a.id = b.AccountId)
    ID NAME                     MKTKEY FROMDT     TODATE
    123 John Doe                  10987 28.09.2009 31.12.4712
    124 Jane Donaldson            10541 03.12.2010 31.12.4712
    125 Harold Douglas            10987 01.01.2011 31.01.2012
    If your tables are big it could be necessary to do the filtering (according to your condition) in an early step (the first CTE).
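    A hedged sketch of that early filtering (the 'DO' prefix stands in for the name parameter mentioned in the original post):
    with
    basedata as (
    select m.key, m.AccountId, m.MktKey, m.FromDt, m.ToDate
      from marketer_account m
      join account a on (a.id = m.AccountId)
     where upper(a.name) like 'DO' || '%'
    union all
    select r.key, r.AccountId, r.MktKey, r.FromDt, r.ToDate
      from marketer_account_archive r
      join account a on (a.id = r.AccountId)
     where upper(a.name) like 'DO' || '%'
    )
    select * from basedata;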
    Regards
    Martin

  • Can not insert/update data from table which is created from view

    Hi all
    I'm using Oracle Database 11g.
    I've created a table from a view with the following command:
    Create table table_new as select * from View_Old
    I can insert/update data in table_new from the command line.
    But I cannot insert/update data in table_new via the SI Object Browser tool or Oracle SQL Developer tool (read only).
    Can anybody tell me what's happening? What's the cause?
    Thank you
    thiensu
    Edited by: user8248216 on May 5, 2011 8:54 PM
    Edited by: user8248216 on May 5, 2011 8:55 PM

    I can insert/update data into table_new by command line, but I cannot insert/update data of table_new by SI Object Browser tool or Oracle SQL Developer tool (read only).
    So what is wrong with the GUI tools, and why post to the DATABASE forum when the database itself works OK?

  • Not able to refresh the data in a table

    Hi, in my application I fill data into a table on click of a button.
    Following are the lines of code I have used:
    RichColumn richcol = (RichColumn) userTableData.getChildren().get(0);  // fetch the first column of my table
    richcol.getChildren().add(reportedByLabel);                           // reportedByLabel is a RichInputText
    AdfFacesContext adfContext1 = AdfFacesContext.getCurrentInstance();
    adfContext1.addPartialTarget(richcol);
    adfContext1.addPartialTarget(userTableData);
    But on submit of that button the table data is not refreshed, even after adding a partial trigger on that table as well as on that column. Any idea??
    Edited by: Shubhangi m on Jan 27, 2011 3:50 AM

    Hi,
    The Code that you have shown adds an additional inputText component to the first column of a table.
    Is that your intention?
    If yes, please use the following code snippet to achieve your functionality:
    JSPX Code:
    <af:form id="f1">
    <af:commandButton text="Add Column" id="cb1"
    actionListener="#{EmployeesTableBean.onAddColumn}"/>
    <af:table value="#{bindings.Employees.collectionModel}" var="row"
    rows="#{bindings.Employees.rangeSize}"
    emptyText="#{bindings.Employees.viewable ? 'No data to display.' : 'Access Denied.'}"
    fetchSize="#{bindings.Employees.rangeSize}"
    rowBandingInterval="0"
    selectedRowKeys="#{bindings.Employees.collectionModel.selectedRow}"
    selectionListener="#{bindings.Employees.collectionModel.makeCurrent}"
    rowSelection="single" id="t1"
    binding="#{EmployeesTableBean.table}">
    <af:column sortProperty="EmployeeId" sortable="false"
    headerText="#{bindings.Employees.hints.EmployeeId.label}"
    id="c1">
    <af:outputText value="#{row.EmployeeId}" id="ot2">
    <af:convertNumber groupingUsed="false"
    pattern="#{bindings.Employees.hints.EmployeeId.format}"/>
    </af:outputText>
    </af:column>
    <af:column sortProperty="FirstName" sortable="false"
    headerText="#{bindings.Employees.hints.FirstName.label}"
    id="c2">
    <af:outputText value="#{row.FirstName}" id="ot1"/>
    </af:column>
    <af:column sortProperty="LastName" sortable="false"
    headerText="#{bindings.Employees.hints.LastName.label}"
    id="c3">
    <af:outputText value="#{row.LastName}" id="ot3"/>
    </af:column>
    </af:table>
    </af:form>
    Bean:
    import javax.faces.event.ActionEvent;
    import oracle.adf.view.rich.component.rich.data.RichColumn;
    import oracle.adf.view.rich.component.rich.data.RichTable;
    import oracle.adf.view.rich.component.rich.input.RichInputText;
    import oracle.adf.view.rich.context.AdfFacesContext;

    public class EmployeesTableBean {
        private RichTable table;

        public EmployeesTableBean() {
        }

        public void setTable(RichTable table) {
            this.table = table;
        }

        public RichTable getTable() {
            return table;
        }

        public void onAddColumn(ActionEvent actionEvent) {
            RichInputText newRichInputText = new RichInputText();
            newRichInputText.setId("new");
            newRichInputText.setValue("Name:");
            // add the new input text to the first column, then repaint the table
            RichColumn richcol = (RichColumn) table.getChildren().get(0);
            richcol.getChildren().add(newRichInputText);
            AdfFacesContext adfContext1 = AdfFacesContext.getCurrentInstance();
            adfContext1.addPartialTarget(table);
        }
    }
    Thanks,
    Navaneeth

  • Unable to capture the adf table column sort icons using open script tool

    Hi All,
    I am new to OATS and I am trying to create script for testing ADF application using open script tool. I face issues in recording two events.
    1. I am unable to record the event of clicking adf table column sort icons that exist on the column header. I tried to use the capture tool, but that couldn't help me.
    2. The second issue is I am unable to capture the panel header text. The component can be identified but I was not able to identify the supporting attribute for the header text.

    Hi keerthi,
    1. I have pasted the code for the first issue
    web.button(122,
        "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1824fhkchs_6']/web:form[@id='pt1:_UISform1' or @name='pt1:_UISform1' or @index='0']/web:button[@id='pt1:MA:0:n1:1:pt1:qryId1::search' or @value='Search' or @index='3']")
        .click();
    adf.table(
        "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1c9nk1ryzv_6']/web:ADFTable[@absoluteLocator='pt1:MA:n1:pt1:pnlcltn:resId1']")
        .columnSort("Ascending", "Name");
