Gather stats on object usage

I want to identify unused user/schema objects in a database.
Developers do not clean up their unused data (tables, schema objects, etc.) regularly, which causes disk space shortages, and I believe the database should hold only clean, actively used data.
To catch these unused database objects, we currently prepare reports for developers covering all Oracle user/schema objects and ask them to drop the unused ones.
But there must be an easier way to do this; there should be an easier way for DBAs to catch these kinds of database objects.
In short, how can I gather/find statistics/information on:
a) Oracle users which are not accessed (used)
b) Oracle database objects which are not accessed (used)
Regards,

Hi,
Well, there are various levels of auditing, and of course, the more you want to log, the more load it will put on your system.
In the Administration tab of DB Console (or Grid Control) you will find a link for Audit Settings. There you can define what you want to log. Start cautiously and do not be too greedy (e.g. BY ACCESS vs. BY SESSION granularity).
After you enable auditing, it will populate the sys.aud$ table.
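For instance, a minimal sketch of the kind of auditing described above (the table name is a placeholder, and this assumes the audit_trail initialization parameter is set to DB so records land in sys.aud$):

```sql
-- Audit logons, to help spot users that never connect
AUDIT SESSION;

-- Audit SELECT access on a suspect table, once per session to limit overhead
AUDIT SELECT ON scott.some_table BY SESSION;

-- Later, check who actually touched the audited objects
SELECT username, obj_name, action_name, MAX(timestamp) AS last_used
FROM   dba_audit_trail
GROUP  BY username, obj_name, action_name;
```

Objects and users that never show up in the audit trail over a representative period are candidates for the "unused" report.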
Regards,
Thierry
Edited by: Urgent-IT on Dec 24, 2010 11:10 AM

Similar Messages

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB Version is:
    BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    In our application, we have users uploading files, resulting in inserts of records into a table. A file could contain anywhere from 10,000 to 1 million records.
    I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables if huge files are uploaded simultaneously. After gathering stats, the cost drops and the queries execute faster.
    We have 2 such tables which grow based on user file uploads. I would like to schedule a job to gather stats for these two tables during an off-peak hour, in addition to the nightly automated Oracle job.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    create or replace
    PROCEDURE p_manual_gather_table_stats AS
        TYPE ttab IS TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
        ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(
                ownname          => USER,
                tabname          => ltab(i),
                estimate_percent => dbms_stats.auto_sample_size,
                method_opt       => 'for all indexed columns size auto',
                degree           => dbms_stats.auto_degree,
                cascade          => TRUE);
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.
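    One sketch of keeping the default collection job away from these two tables, assuming that is the route chosen (TAB1/TAB2 as in the post): lock their statistics so the automatic job skips them, and have the manual job override the lock.

    ```sql
    -- Lock stats so the automatic collection job leaves these tables alone
    EXEC DBMS_STATS.LOCK_TABLE_STATS(USER, 'TAB1');
    EXEC DBMS_STATS.LOCK_TABLE_STATS(USER, 'TAB2');

    -- The scheduled manual gather must then override the lock
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TAB1', force => TRUE);
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TAB2', force => TRUE);
    ```

    This way only the off-peak job refreshes the stats, at a time when they are known to be representative.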

  • How to gather stats on the target table

    Hi
    I am using OWB 10gR2.
    I have created a mapping with a single target table.
    I have checked the mapping configuration 'Analyze Table Statements'.
    I have set target table property 'Statistics Collection' to 'MONITORING'.
    My requirement is to gather stats on the target table, after the target table is loaded/updated.
    According to Oracle's OWB 10gR2 User Document (B28223-03, Page#. 24-5)
    Analyze Table Statements
    If you select this option, Warehouse Builder generates code for analyzing the target
    table after the target is loaded, if the resulting target table is double or half its original
    size.
    My issue is that when my target table's size has not doubled or halved, the target table DOES NOT get analyzed.
    I am looking for a way or setting in OWB 10gR2 to gather stats on my target table after it is loaded/updated, regardless of its size.
    Thanks for your help in advance...
    ~Salil

    Hi
    Unfortunately we have to disable the automatic stats gathering on the 10g database.
    My requirement is to extract data from one database, load it into my TEMP tables, process it, and finally load it into my data warehouse tables.
    So I need to make sure to analyze my TEMP tables after they are truncated, loaded, and subsequently updated, before I can process the data and load it into my data warehouse tables.
    I also need to truncate all TEMP tables after the load is completed, to save space on my target database.
    If we keep automatic stats ON for my target 10g database, it might gather stats for those TEMP tables while they are empty at the time of the gather.
    Any ideas to overcome this issue is appreciated.
    Thanks
    Salil
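    Two common sketches for this situation, rather than disabling the auto job entirely (TEMP_T is a placeholder table name): freeze representative stats, or remove stats and rely on dynamic sampling.

    ```sql
    -- Option A: gather once when TEMP_T is representatively loaded, then lock,
    -- so the auto job cannot overwrite the stats while the table is empty
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TEMP_T');
    EXEC DBMS_STATS.LOCK_TABLE_STATS(USER, 'TEMP_T');

    -- Option B: delete the stats and lock them empty, so the optimizer
    -- falls back to dynamic sampling at parse time
    EXEC DBMS_STATS.DELETE_TABLE_STATS(USER, 'TEMP_T');
    EXEC DBMS_STATS.LOCK_TABLE_STATS(USER, 'TEMP_T');
    ```

    Either option lets the automatic job stay enabled for the rest of the schema.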

  • Performance tuning - Gather stats and statID created

    SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE('HR', 'SAVED_STATS');
    SQL> SELECT TABLE_NAME, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, USER_STATS, GLOBAL_STATS
      2  FROM USER_TABLES
      3  WHERE TABLE_NAME = 'MYCOUNTRIES';
    TABLE_NAME   NUM_ROWS  BLOCKS  EMPTY_BLOCKS  AVG_SPACE  USE  GLO
    MYCOUNTRIES         0       0             0          0  NO   YES
    SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>'HR', TABNAME=>'MYCOUNTRIES', ESTIMATE_PERCENT=>10, STATOWN=>'HR', STATTAB=>'SAVED_STATS', STATID=>'PREVIOUS1');
    TABLE_NAME   NUM_ROWS  BLOCKS  EMPTY_BLOCKS  AVG_SPACE  USE  GLO
    MYCOUNTRIES        25       5             0          0  NO   YES
    SQL> select statid, type, count(*)
      2  from saved_stats
      3  group by statid, type;
    STATID     T  COUNT(*)
    PREVIOUS1  C         3
    PREVIOUS1  T         1
    Qn) Are the stored statistics keyed by statid? I.e., every time I re-gather stats, should I specify a separate statid name, so that a new version of the stats is created/stored under its own statid?

    Are the statistics which are stored based on statid? i.e. every time I re-gather stats should I be mentioning a separate statid name, so that a new version of stats is created/stored in a separate statid.
    Yes.
    From http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461
    statid: Identifier (optional) to associate with these statistics within stattab

  • Gather stats on every table in a schema

    Hi,
    I have a CRM application running on a 10g R2 DB. It has 5000 tables, of which fewer than 10% are dynamic. The gather stats job runs every day at 2am successfully.
    I was monitoring the statistics (dba_tables, dba_tab_modifications, dba_tab_statistics) and noticed that only 28 tables are being updated with fresh stats every day for the CRM schema, and most of these tables are the same ones. During query tuning I found that some tables have stale stats, but they do not show up in the STALE_STATS column of dba_tab_statistics, even though dba_tab_modifications shows rows inserted and updated for them.
    My question: is there any drawback in gathering stats for all the tables every day, irrespective of whether 10% of the data has changed, while skipping tables with no rows?

    Thanks for the quick response, it was helpful.
    Due to application vendor recommendations, stats were disabled for some tables and optimizer parameters were changed, which prevents dynamic sampling from being used for some queries against the tables with no stats. As per the documentation, Oracle would otherwise calculate stats on the fly when querying tables whose stats have not been gathered.
    As of now I am not gathering stats manually for this schema, since the auto job is scheduled. I will verify whether the stats really are updated once 10% of the data is loaded; if not, I may manually gather stats for only those tables.
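    To check which tables the auto job should consider stale, a query along these lines can help (the schema name and the 10% threshold are assumptions based on the discussion above):

    ```sql
    -- Flush pending monitoring data so *_tab_modifications is current
    EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

    -- Tables whose tracked DML exceeds 10% of the last-analyzed row count
    SELECT m.table_name,
           m.inserts + m.updates + m.deletes AS changes,
           t.num_rows
    FROM   dba_tab_modifications m
    JOIN   dba_tables t
           ON  t.owner = m.table_owner
           AND t.table_name = m.table_name
    WHERE  m.table_owner = 'CRM'
    AND    m.inserts + m.updates + m.deletes > 0.1 * NVL(t.num_rows, 0);
    ```

    Comparing this list against what the 2am job actually refreshed would show whether the auto job is really skipping stale tables.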

  • Auto task - Gather Stats.How do Oracle knows which DB objects to analyze ?

    Hi,
    Does Oracle have any specific criteria for identifying which DB objects need to be analyzed as part of the gather stats "auto task"?
    Does it use information from DBA_TAB_MODIFICATIONS to find a fixed % of change, like 10%, or does it analyze all objects?

    Copied and pasted from the documentation, which you can find via some simple Google searches if you don't know about http://docs.oracle.com
    GATHER AUTO: Gathers all necessary statistics automatically. Oracle implicitly determines which objects need new statistics, and determines how to gather those statistics. When GATHER AUTO is specified, the only additional valid parameters are stattab,statid, objlist and statown; all other parameter settings are ignored. Returns a list of processed objects.
    GATHER STALE: Gathers statistics on stale objects as determined by looking at the *_tab_modifications views. Also, return a list of objects found to be stale.
    GATHER EMPTY: Gathers statistics on objects which currently have no statistics. Return a list of objects found to have no statistics.
    14.2.1 GATHER_STATS_JOB
    Optimizer statistics are automatically gathered with the job GATHER_STATS_JOB. This job gathers statistics on all objects in the database which have:
    Missing statistics
    Stale statistics
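    As a sketch, a GATHER AUTO run can also report which objects it actually processed, via the objlist out parameter mentioned in the documentation above (the schema name is a placeholder):

    ```sql
    SET SERVEROUTPUT ON
    DECLARE
        l_objs DBMS_STATS.ObjectTab;
    BEGIN
        DBMS_STATS.GATHER_SCHEMA_STATS(
            ownname => 'SCOTT',
            options => 'GATHER AUTO',
            objlist => l_objs);
        -- list what the automatic decision actually gathered
        FOR i IN 1 .. l_objs.COUNT LOOP
            DBMS_OUTPUT.PUT_LINE(l_objs(i).objtype || ' ' || l_objs(i).objname);
        END LOOP;
    END;
    /
    ```

    This makes the implicit "Oracle decides" behavior visible after the fact.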

  • Refreshing mview is hanging after a database level gather stats

    hi guys,
    can you please help me identify the root cause of this issue.
    the scenario is this:
    1. we have a scheduled unix job that will refresh an mview everyday, from tuesday to saturdays.
    2. database maintenance being done during weekends (sundays), gathering stats at a database level.
    3. the refresh mview unix job apparently is hanging every tuesdays.
    4. our workaround is to kill the job, request for a schema gather stats, then re-run the job. and voila, refresh mview will be successful then.
    5. and the rest of the weekdays until saturdays, refresh mview is having no problems.
    We already identified during testing that the refresh mview fails after we gather stats at the database level.
    When we gather stats at the schema level, the refresh mview succeeds.
    Can you please help me understand why the mview refresh fails after we gather stats at the database level?
    we are using oracle 9i
    the creation of the mview goes something like below:
    create materialized view hanging_mview
    build deferred
    refresh on demand
    query rewrite disabled
    appreciate all your help.
    thanks a lot in advance.

    1. we have a scheduled unix job that will refresh an mview everyday, from tuesday to saturdays.
    2. database maintenance being done during weekends (sundays), gathering stats at a database level.
    3. the refresh mview unix job apparently is hanging every tuesdays.
    4. our workaround is to kill the job, request for a schema gather stats, then re-run the job. and voila, refresh mview will be successful then.
    5. and the rest of the weekdays until saturdays, refresh mview is having no problems.
    You know Tuesday's MV refresh "hangs".
    You don't know why it does not complete.
    You desire solution so that it does complete.
    You don't really know what it is doing on Tuesdays, but hope an automagical solution will be offered here.
    The ONLY way I know how to possibly get some clues is SQL_TRACE.
    Only after knowing where time is being spent will you have a chance to take corrective action.
    The ball is in your court.
    Enjoy your mystery!
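    A minimal SQL_TRACE sketch on 9i, run inside the session that performs the refresh (the mview name comes from the post):

    ```sql
    ALTER SESSION SET timed_statistics = TRUE;
    ALTER SESSION SET sql_trace = TRUE;

    -- the statement that "hangs" on Tuesdays
    EXEC DBMS_MVIEW.REFRESH('HANGING_MVIEW');

    ALTER SESSION SET sql_trace = FALSE;
    -- then run tkprof on the trace file from user_dump_dest
    -- to see where the time is actually spent
    ```

    The tkprof report shows the waits and SQL where the refresh stalls, which is the clue the answer above is asking for.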

  • Gather Stats on Newly Partitioned Table

    I partitioned an existing table containing 92 million rows. The method used dbms_redefinition, whereby I started the redef and then added the indexes and constraints last. After partitioning, I did not gather stats on any of the partitions that were created, and I did not analyze any of the indexes. Then I loaded an additional 4 million records into one of the partitions of the newly partitioned table. I ran dbms_stats on this particular partition and it took over 15 hours. Normally it only takes 4 hours to run dbms_stats on the individual partitions, so I stopped it after 15 hours. When I monitored it while it was running, it looked like it was spending a very long time gathering stats on the indexes. Is this normal for a newly partitioned table? Is there something I can do to prevent it from taking so long when I run gather stats? Oracle Version 10.2.0.4

    -- Gather PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(
        ownname          => upper(v_table_owner),
        tabname          => upper(v_table_name),
        partname         => v_table_partition_name,
        estimate_percent => 20,
        cascade          => FALSE,
        granularity      => 'PARTITION');
    -- Gather GLOBAL INDEX Statistics
    for i in (select * from sys.dba_indexes
              where table_owner = upper(v_table_owner)
              and table_name = upper(v_table_name)
              and partitioned = 'NO'
              order by index_name)
    loop
        SYS.DBMS_STATS.gather_index_stats(
            ownname          => upper(v_table_owner),
            indname          => i.index_name,
            estimate_percent => 20,
            degree           => NULL);
    end loop;
    -- Gather SUB-PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(
        ownname          => upper(v_table_owner),
        tabname          => upper(v_table_name),
        partname         => v_table_subpartition_name,
        estimate_percent => 20,
        cascade          => TRUE,
        granularity      => 'ALL');

  • DBMS Gather Stats using ESTIMATE, gives varying number of rows

    I am a little intrigued by the gather_table_stats results for number of rows on user_tables.
    A table with no maintenance on it, has 242115 rows.
    When I gather stats COMPUTE, num_rows from user_tables = 242115.
    However, when I ESTIMATE the figure changes, without apparent reason:
    10% - 240710
    25% - 240564
    50% - 242904
    99% - 242136
    Using ESTIMATE, I would expect the number of rows that are inspected to change, but not the resulting NUM_ROWS in user_tables!
    I wonder, why is this?
    Thanks

    Thank you for that amusing analogy!
    However, it would be interesting to know where it gets this idea from. Why does it decide sometimes more, sometimes less, on what basis?
    Actually I'm not the person that knows the precise algorithm, and I don't have links handy to docs that describe how it is done. But one approach would be to enable trace and check what SQL Oracle issues to gather stats. Of course that won't reveal the whole algorithm, but you'll probably get some insight.
    I mean, if I knew I had not removed or added any socks to the wardrobe, or even if I was unsure anyone else had, I would use the previous count as my starting point.
    AFAIK Oracle doesn't have any previous knowledge, or to be more precise, Oracle doesn't use it. As a person you probably know something more about how the table was or wasn't changed, but Oracle doesn't know and/or use such information, at least for stats gathering.
    Gints Plivna
    http://www.gplivna.eu

  • How to check whether gather stats job is running or not in OEM

    Hi,
    People on our team say there is an automatic job running in OEM to gather table statistics, and that it decides which tables need stats gathered.
    I don't have much experience with OEM (Oracle 10g); please let me know how to check the job that gathers table statistics, and where to find it.
    Thanks in advance,
    Mahi

    You may query dba_scheduler_job_log, like:
    SQL> select JOB_NAME, LOG_DATE, STATUS from dba_scheduler_job_log;
    There you should see GATHER_STATS_JOB and its runs.

  • Can we change GATHER_STATS_PROG (auto gather stat job) behavior

    Hi All,
    Is there any way we can change the default Oracle auto gather job?
    We know that the Oracle gather job (GATHER_STATS_JOB, below 11g) triggers based on the maintenance window, and we have a few huge production environments.
    We're thinking of gathering application schema stats using our own scripts, so we would like to know whether we can still leave the Oracle auto gather stats enabled
    but exclude the application schema.
    Thanks in advance.
    Regards,
    Klnghau

    Hi All,
    Forgot to mention my Oracle version is 10.2.0.4, and I think I found the solution in My Oracle Support:
    Oracle10g: New DBMS_STATS parameter AUTOSTATS_TARGET [ID 276358.1]
    This is a new parameter in Oracle10g for the DBMS_STATS package.
    According to the documentation for this package in file dbmsstat.sql
    (under ORACLE_HOME/rdbms/admin):
    This parameter is applicable only for auto stats collection.
    The value of this parameter controls the objects considered for stats collection.
    It takes the following values:
    'ALL' -- statistics collected for all objects in system
    'ORACLE' -- statistics collected for all oracle owned objects
    'AUTO' -- oracle decides for which objects to collect stats
    The default is AUTO; I think I can change it to ORACLE in my environment.
    Thanks
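    A minimal sketch of checking and changing that parameter via DBMS_STATS:

    ```sql
    -- Current setting of the auto stats target
    SELECT DBMS_STATS.GET_PARAM('AUTOSTATS_TARGET') FROM dual;

    -- Restrict the automatic job to Oracle-owned objects only,
    -- leaving application schemas to the custom scripts
    EXEC DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET', 'ORACLE');
    ```

    With this in place the maintenance-window job no longer touches the application schema, matching the requirement in the question.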

  • Gather stats:Time to complete?

    Experts..
    Could you please let me know how to estimate the time to complete gathering stats (analyze) for a huge number of tables?
    I would like to calculate how long the analyze will take to finish in the database.
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    PL/SQL Release 9.2.0.6.0 - Production
    CORE     9.2.0.6.0     Production
    TNS for Solaris: Version 9.2.0.6.0 - Production
    NLSRTL Version 9.2.0.6.0 - Production
    Thanks

    hi,
    create a table:
    create table analyze_time(task varchar2(20), time date);
    create a procedure for stats gathering:
    create or replace procedure gather_st is
    begin
        insert into analyze_time values('start', sysdate);
        commit;
        --write dbms_stats procedure here to gather stats
        insert into analyze_time values('end', sysdate);
        commit;
    end;
    /
    Execute this procedure to gather stats, and afterwards query analyze_time to check what time it started and what time it finished.
    In SQL*Plus, you can use "set timing on" and then gather stats; when it finishes, the prompt will show you the time it took to complete the operation.
    Salman

  • Gather Schema Statistics - GATHER AUTO option failing to gather stats

    Hi ,
    We recently upgraded to a 10g DB and version 11.5.10 of Oracle EBS. I want to employ the GATHER AUTO option while running Gather Schema Statistics.
    To test it, I created a test table with 1 million rows, then gathered stats for this table alone using Gather Table Stats. Next I deleted ~12% of the rows and committed. The all_tab_statistics view shows that the table has stale statistics (STALE_STATS column = YES). After that I ran Gather Schema Stats for that particular schema, but the request did not pick up the test table.
    What is the criterion by which Oracle chooses which tables to gather statistics for under the Gather Auto option? I am aware of the 10% change threshold, but how is this 10% calculated? Is it based only on (inserts + updates + deletes)?
    Also, what is the difference between Gather Auto and Gather Stale?
    Any help is appreciated.
    Thanks,
    Jithin

    Randalf,
    FYI.. this is what happens inside the concurrent program call; there are a few additional parameters for output/error msgs:
    procedure GATHER_SCHEMA_STATS(errbuf out varchar2,
    retcode out varchar2,
    schemaname in varchar2,
    estimate_percent in number,
    degree in number ,
    internal_flag in varchar2,
    request_id in number,
    hmode in varchar2 default 'LASTRUN',
    options in varchar2 default 'GATHER',
    modpercent in number default 10,
    invalidate in varchar2 default 'Y')
    is
    exist_insufficient exception;
    bad_input exception;
    pragma exception_init(exist_insufficient,-20000);
    pragma exception_init(bad_input,-20001);
    l_message varchar2(1000);
    Error_counter number := 0;
    Errors Error_Out;
    -- num_request_id number(15);
    conc_request_id number(15);
    degree_parallel number(2);
    begin
    -- Set the package body variable.
    stathist := hmode;
    -- check first if degree is null
    if degree is null then
    degree_parallel:=def_degree;
    else
    degree_parallel := degree;
    end if;
    l_message := 'In GATHER_SCHEMA_STATS , schema_name= '|| schemaname
    || ' percent= '|| to_char(estimate_percent) || ' degree = '
    || to_char(degree_parallel) || ' internal_flag= '|| internal_flag ;
    FND_FILE.put_line(FND_FILE.log,l_message);
    BEGIN
    FND_STATS.GATHER_SCHEMA_STATS(schemaname, estimate_percent,
    degree_parallel, internal_flag, Errors, request_id,stathist,
    options,modpercent,invalidate);
    exception
    when exist_insufficient then
    errbuf := sqlerrm ;
    retcode := '2';
    l_message := errbuf;
    FND_FILE.put_line(FND_FILE.log,l_message);
    raise;
    when bad_input then
    errbuf := sqlerrm ;
    retcode := '2';
    l_message := errbuf;
    FND_FILE.put_line(FND_FILE.log,l_message);
    raise;
    when others then
    errbuf := sqlerrm ;
    retcode := '2';
    l_message := errbuf;
    FND_FILE.put_line(FND_FILE.log,l_message);
    raise;
    END;
    FOR i in 0..MAX_ERRORS_PRINTED LOOP
    exit when Errors(i) is null;
    Error_counter:=i+1;
    FND_FILE.put_line(FND_FILE.log,'Error #'||Error_counter||
    ': '||Errors(i));
    -- added to send back status to concurrent program manager bug 2625022
    errbuf := sqlerrm ;
    retcode := '2';
    END LOOP;
    end;

  • Gather stats after shrink command

    Hello.
    Running Oracle 11.2 and was preparing to run shrink command on a major table to release 5G of wasted space on a 15G segment.
    I have already done this in our test database and was preparing to do this in production, when one web site I went to said we should run fresh stats after the shrink operation.
    Does this make sense, and does it seem to be necessary?
    It does seem to make sense that the values for number of blocks, number of empty blocks would be different after the shrink operation.
    But I am surprised I did not see this recommendation on any other site except the one site.

    Well, sure... if we want to consider this a DML or DDL operation, but actually, we are not changing or defining new data structure, and we are not manipulating the data (per se).
    But, I'm in agreement that it makes sense to gather fresh stats just based on the difference of blocks and empty blocks which we can assume the optimizer considers when choosing an execution plan.
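    A sketch of the whole sequence (the table name is a placeholder; enabling row movement is a prerequisite for shrink):

    ```sql
    -- Reclaim the wasted space in the segment
    ALTER TABLE big_table ENABLE ROW MOVEMENT;
    ALTER TABLE big_table SHRINK SPACE CASCADE;

    -- Refresh stats so BLOCKS / EMPTY_BLOCKS reflect the new segment size
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'BIG_TABLE', cascade => TRUE);
    ```

    The CASCADE option on the shrink also compacts dependent indexes, and cascade => TRUE on the gather refreshes their stats in the same pass.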

  • Gather Stats Job in 11g

    Hi,
    I am using 11.1.0.7 on IBMAIX Power based 64 bit system.
    In 10g, if I query the dba_scheduler_jobs view, I see the GATHER_STATS_JOB for automated statistics collection, but in 11g I don't see it; instead I see the BSLN_MAINTAIN_STATS_JOB job, which executes the BSLN_MAINTAIN_STATS_PROG program for stats collection.
    And if I query DBA_SCHEDULER_PROGRAMS, I also see the GATHER_STATS_PROG program there. Can the gurus help me understand both in 11g? Why are there two different programs, and what is the difference?
    Actually the problem is that i am receiving following error message in my alert log file
    Mon Aug 16 22:01:42 2010
    GATHER_STATS_JOB encountered errors.  Check the trace file.
    Errors in file /oracle/diag/rdbms/usgdwdbp/usgdwdbp/trace/usgdwdbp_j000_1179854.trc:
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    The trace file shows:
    *** 2010-08-14 22:10:14.449
    *** SESSION ID:(2028.20611) 2010-08-14 22:10:14.449
    *** CLIENT ID:() 2010-08-14 22:10:14.449
    *** SERVICE NAME:(SYS$USERS) 2010-08-14 22:10:14.449
    *** MODULE NAME:(DBMS_SCHEDULER) 2010-08-14 22:10:14.449
    *** ACTION NAME:(ORA$AT_OS_OPT_SY_3407) 2010-08-14 22:10:14.449
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    *** 2010-08-14 22:10:14.450
    GATHER_STATS_JOB: GATHER_TABLE_STATS('"DWDB_ADMIN_SYN"','"TEMP_HIST_HEADER_LIVE"','""', ...)
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    But we don't have GATHER_STATS_JOB in 11g, and the job BSLN_MAINTAIN_STATS_PROG runs only on weekends, yet the above error message came last night, 16 August.
    Thanks
    Salman

    Thanks to the people who are contributing.
    I know from this information that the table is locked, but I have tried manually locking a table and executing the gather_table_stats procedure, and it runs fine. I have two questions here:
    Where is GATHER_STATS_JOB in 11g? As you can see in the trace file text, it says GATHER_STATS_JOB failed, yet I don't see any GATHER_STATS_JOB in 11g.
    The BSLN_MAINTAIN_STATS_JOB job is supposed to gather statistics, but only on weekend nights, so how come I see this error occurring at 22:11 on the 16th of August, which is not a weekend night?
    Salman
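    For reference, in 11g the automatic optimizer stats collection runs as an autotask rather than a scheduler job, so it can be checked like this:

    ```sql
    -- Is the auto stats client enabled?
    SELECT client_name, status
    FROM   dba_autotask_client
    WHERE  client_name = 'auto optimizer stats collection';

    -- Its recent runs; the job names look like ORA$AT_OS_OPT_SY_nnn,
    -- matching the ACTION NAME in the trace above
    SELECT job_name, job_start_time, job_status
    FROM   dba_autotask_job_history
    WHERE  client_name = 'auto optimizer stats collection'
    ORDER  BY job_start_time DESC;
    ```

    This would confirm whether the run that hit ORA-00054 on the 16th was the autotask, independent of BSLN_MAINTAIN_STATS_JOB.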
