Mail rules are not applied to IMAP subfolders

Hi
I've set up a mail rule that matches 'Every Message' and runs an AppleScript of mine.
The issue is that the script is only called for mail in the root of my IMAP accounts, not for mail in subfolders.
I've searched Google and the forums but cannot find an explanation. Can anybody here help?
Thanks,
Jason

I should point out that the new messages in the subfolders were put there by a Sieve script on the server.

Similar Messages

  • Mail Rules Are Not Being Saved After Mail Is Closed & Opened

    I am using Mail 3.6 with 10.5.8 on a MacBook Pro. I have set up mail rules so that an email from a specific person's email address goes to two places: 1) move to a folder on .Mac and 2) the Inbox in the Mail app. When I first set this up it works, BUT after closing and reopening the program, or waking the computer from sleep, part of the rule disappears: the Rules preference window shows that the email message now goes only to the .Mac folder and not to the Inbox!
    I spent a lot of time with Tech Support by telephone and even ended up reinstalling (A&I) OS 10.5 (plus updates); this was done on the tech's theory that Mail was corrupt at the root level. HOWEVER, that was not the solution, as the problem still exists with the reinstalled OS.
    Can someone from Apple's tech support who has knowledge of this issue please help?
    Thanks,
    Bob

    Since replies to questions and comments in these Discussions come mostly from other users who are registered to post and reply here, you may get a better result from a call to AppleCare support. If there is an incident or case number covering the corrective actions taken so far in your phone conversations with Tech Support, be certain to reference that conversation so there can be proper follow-up. (You could also refer whoever you talk to there to your post here.)
    There may be some other issue, perhaps relating to hardware or something else, that bears looking into. Have you looked at the system log files and reports that Console generates while the computer is in action? There may be a clue in those almost cryptic reports. You also have a hardware test on a bootable partition of the original system install disc: a test version specific to your computer, the Apple Hardware Test. Boot instructions are on the disc's label. Run it in a repeating loop.
    Is there any third-party software, installed later on, that could cause complications or conflicts with the Mac's Mail app? The problem, if not software, could be hardware, or a logic error.
    If you have AppleCare coverage in effect on the computer, that is the call to make; keep track of your incident/case number.
    Good luck & happy computing!

  • AAD Sync - updates to attribute and partition filter rules are not applied

    The first Attribute filter rules and/or Directory Partition filters we add with a new AAD Sync installation work fine immediately.
    Any subsequent changes to the rules (new rules, removed rules, updated partition filters) have no impact on the filtering behavior until we reboot the AAD Sync host.
    We've tried restarting the sync service, all sorts of Full Sync runs, etc.; nothing else helps.
    We don't have any duplicate rules (we get those regularly due to the known bug, but always remove them).
    We're running AAD Sync build 1.0.0485.0222.
    Thanks for any suggestions.

    Thanks for your swift suggestions. We've now set up a new environment which finally works fine. We have the same issue at the customer's site, where we'll verify next week (100k user accounts; the filters are crucial there).
    The problem seems to have been the Full Import that we didn't do, or didn't do in the right place (we can no longer verify this since the original test environment is gone). We were probably doing a Full Synchronization instead (as suggested in the documentation on MSDN).
    Still, it's interesting that a reboot helped every time - they don't imply a Full Import just by themselves, I assume?
    Thanks again,
    René

  • Rules are not working correctly in OCS 10.1.2

    Hi,
    A few days ago we encountered a problem in which BCC rules, or any other rules, are not processed out of the user's mailbox.
    For example, a user goes to Oracle Mail (the web access, not WAC), goes to Filters and creates a new rule for dealing with incoming mail (delivered or read, it doesn't matter). The rule is processed ONLY if the user asked to move/copy the mail to a subfolder IN HIS MAILBOX.
    If the user asks to forward ("Send a blind copy to") the delivered mail to a different mailbox (either in OCS or externally, e.g. to Gmail), the mail is NOT processed by the filter.
    Other than that, mail is treated normally, incoming and outgoing. Manually forwarding to other OCS users also works, as does forwarding to external mail systems such as Gmail. Only the 'automatic' forward rule does not work.
    We have checked with "oesrl -p" that the rules/filters are created - and they are.
    How can we troubleshoot this issue? Has anyone encountered it?
    System details:
    OCS 10.1.2
    Platform: Red Hat AS 4
    DB version: 10.1.0.4.2
    Thanks,
    -- Itay.

    Update:
    Problem resolved: restarting the SMTP-out service fixed it.

  • Selection criteria are not applied to summary fields on group footers.

    I wonder if anyone can help me with this problem. I am using Crystal Reports version 11.2, and my data source is a SQL Server view.
    The records on the view have a date field, and I have selected all records within a given date range in "Selection Formulas".
    The records are then grouped, and the Crystal summary facility used to summarise number fields on the group footers.
    So for example, if my view contains four records, one with field "amount" = 2, one with field "amount" = 8, one with field "amount" = 6, one with field "amount" = 3, but only the first two records are within the valid date range, you would expect to see the first two records listed out at detail level, then field "amount" summarised at group level, with a summarised value of 10.
    ie ....                record1                      2
                           record2                      8   
                           group level total         10
    This works fine when I run the report using Crystal's "print preview" facility. However, when the report is run from within an application written in C#.NET, the selection criteria are not applied to the summary field, so you get:
                           record1                      2
                           record2                      8   
                           group level total          19
    I tried putting the date selection criteria at both record and group level, but that did not work.
    I googled the problem and found an article explaining that Crystal first performs the record-level selection, then it creates the groups and totals up any summary fields, and only then does it apply the group-level selection criteria, which can lead to problems like the one I have described above.  However, since I have put my date selection criteria at both record and group level, I do not understand why I still get the problem.
    In one report I got round this problem by creating a formula that returned zero if the record date was outside of the valid date range, and returned the number field to be summarised if the date was valid, then summarising that formula, instead of summarising the number field directly.
    In other reports I created one formula to set a shared variable as zero, then another formula to accumulate it at detail record level, then another formula to display the variable at the group footer.  In other words, I did not bother with the Crystal summary facility at all, but created my own summary facility.
    While googling the problem to see what other people did in this situation, I noticed that most fixes used variations of the "shared variables and formulae" fix to get round the problem.
    The problem is that I have lots of complex reports and it will take ages to replace the summarised fields with shared variables and formulae.  The reports were initially tested with "Print Preview" so we did not notice this problem until the C#.Net application was ready to use them.  And I can't believe that you are simply meant to ignore the summary facility and re-invent the wheel by doing it all manually.
    Please tell me that there is something simple that I have been doing wrong!!!  If I have not given enough information for you to answer, please let me know.
    Thanks,
    Anne-Marie

    Hi Anne-Marie,
    You may be running into a common issue that is documented here:
    [SelectionFormula|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_erq/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233303334333833393335%7D.do]
    Regards,
    Jonathan
    Edited by: Jonathan Parminter on Mar 16, 2009 8:03 AM
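    For reference: since the report's data source is a SQL Server view, the 'zero outside the date range' workaround Anne-Marie describes could also be sketched on the database side rather than in a Crystal formula. The view, table and column names below are hypothetical, and the fixed date window merely stands in for whatever date parameters the report actually uses:
        -- Hypothetical helper view: rows outside the date window contribute 0 to any summary
        CREATE VIEW dbo.vw_amounts_in_range AS
        SELECT
            t.record_id,
            t.group_key,
            t.record_date,
            CASE WHEN t.record_date BETWEEN '2009-01-01' AND '2009-01-31'
                 THEN t.amount ELSE 0 END AS amount_in_range
        FROM dbo.source_table AS t;
    Summarising amount_in_range at the group level should then match the formula-based workaround, without relying on Crystal's group-level selection.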

  • Depreciation posting rules are not defined for area 00 in company code

    Dear Expert,
    I would like to seek help: when I run a customized report (AUC without PO, not yet capitalized), the system prompts the error 'Depreciation posting rules are not defined for area 00 in company code', but only from March 2011 onward; before March 2011 the report displays as usual.
    By the way, does SAP have a standard report to list assets that are AUC and not yet capitalized?
    My version of SAP is ECC 6.0.
    Kindly advise.
    Regards,
    Karen

    Hi,
    Thank you for the prompt reply. Why does the system prompt the error 'Depreciation posting rules are not defined for area 00 in company code' only from March 2011 onward, while before March 2011 the report displays as usual?
    Please help.
    Regards,
    Karen

  • I have one problem with Data Guard. My archive log files are not applied.

    I have one problem with Data Guard: my archive log files are not applied, even though I have received all archive log files on my physical standby DB.
    I have created a Physical Standby database on Oracle 10gR2 (Windows XP Professional). The Primary database is on another computer.
    In Enterprise Manager, the Primary database looks OK; I get the following message: "Data Guard status Normal".
    But, as I wrote above, the archive log files are not applied.
    After I created the Physical Standby database, I have also done:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    11) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    12) on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    13) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    14) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries my redo logs are applied now. I think in my case it had to do with the tnsnames.ora: at the moment I have both databases in both tnsnames.ora files using the SID rather than the SERVICE_NAME.
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and looks like it is hanging. The log, however, says that it succeeded.
    In another session 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
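    As a hedged aside (not taken from this thread's eventual resolution): the log excerpts above show 'MRP0: Background Media Recovery terminated with error 1110' together with repeated 'No standby redo logfiles created' warnings, and those are the kinds of things one might dig into next on the standby. The group numbers, file path and 50M size below are placeholders, not values from this configuration:
        -- Error 1110 usually accompanies a datafile problem; see which files recovery is unhappy about
        SELECT name, error FROM v$datafile_header WHERE error IS NOT NULL;
        -- Confirm whether any archived log has actually been applied yet
        SELECT MAX(sequence#) AS last_applied FROM v$archived_log WHERE applied = 'YES';
        -- Add standby redo logs to address the RFS warnings (size them like the online redo logs)
        ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('C:\oracle\product\10.2.0\oradata\luda\srl04.log') SIZE 50M;
        ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('C:\oracle\product\10.2.0\oradata\luda\srl05.log') SIZE 50M;
        -- Restart redo apply and re-check v$archived_log.applied afterwards
        ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
        ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;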

  • Changes are not applying

    I have made a couple of changes to a movie and they are not applying in the canvas.
    E.g. I have changed the colour of some text that appears. The preview icon on the clip in the timeline has changed the colour of the text, and the preview in the viewer has changed, but the canvas hasn't. Also, it isn't asking me to render, which it usually would.

    You can quickly force a render by lowering, then raising, the opacity of the clip. There are also more involved/complex ways to force a re-render.
    -DH

  • In Mail, attachments are not deleted even though the mail is deleted

    In Mail, attachments are not deleted even though the mail is deleted.

    Check on your server to see what mail options are selected. If your mail account is set to "download all attachments automatically", that may be an option you want to de-select.

  • My other mailboxes are not visible in Mail after upgrading to Mountain Lion

    My other mailboxes are not visible in Mail after upgrading to Mountain Lion.

    In the mailbox panel, move your cursor slowly down the right-hand side; that should reveal Show and Hide controls.

  • Calcmanager rules are not working

    Hello there,
    We have an existing EPM 11.1.2.1 environment, and we just installed a new EPM 11.1.2.3 environment. After the installation and configuration I exported Calc Manager rules from 11.1.2.1 and imported them into the 11.1.2.3 environment.
    It looks like the business rules are not working there. Below are the symptoms of the problem.
    (1) Rules keep giving random validation errors (the same rule that validates in 11.1.2.1 fails in 11.1.2.3).
    (2) If I create a new rule for my Planning application (let's say Test_Rule) and check the DB table, I don't see any object being created. (Select * from HSP_OBJECT where OBJECT_NAME = 'Test_Rule')
    (3) Any updates I make to the rules in 11.1.2.3 are not being saved.
    It looks like Calc Manager is having a hard time talking to the Planning DB.
    Any help will be appreciated.
    Thanks!

    If you have upgraded from 11.1.2.1 to 11.1.2.3 you would need to upgrade the Planning application. Assuming that you have done that, for validation you would need to provide a validation value for all your Run Time Prompts.
    This can be done in the variables tab of the Rule Designer.
    Let me know if you have issues after these steps.
    Sree Menon
    Calc Manager Team

  • Changes are not applying to the reports

    Hi all,
    I have moved the RPD and catalog file to UAT, and there I got a few changes.
    I did the changes in DEV and moved the changed reports to UAT again, but the report changes are not applied in UAT.
    So I copied the same RPD and catalog to UAT again, and now it is working fine.
    Why are the changes not applied to the reports that I moved to UAT?
    Is it mandatory to move the RPD and catalog to the other environment?
    Thanks

    Hi,
    If you change even the filters and column names, you can't modify only the reports and place them in UAT. If you want to make changes that way, there is a tool, CAF; follow this link for how to do that process:
    http://debaatobiee.wordpress.com/category/obiee/setup-and-environment/
    If not, you need to deploy the RPD and catalog into UAT.
    By,
    KK

  • Redo log files are not applying to standby database

    Hi everyone!!
    I have created a standby database on the same server (Windows XP), using Oracle 11g. I want to synchronize my standby database with the primary database, so I tried to apply redo logs from primary to standby as follows.
    My standby database is open and the primary database is not started (instance not started), because only one database can run in exclusive mode when DB_NAME is the same for both databases. I ran the following command on the standby database:
                SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It returns "Database altered". But when I checked the last archive log on the primary database, its sequence is 189, while on the standby database it is 177. That means the archived redo logs are not applied on the standby database.
    The tnsnames.ora file contains entries for both the primary and standby database services, and the same services have been used to transmit and receive redo logs.
    1. How can I resolve this issue?
    2. Is it compulsory to have the primary database open?
    3. I have created the standby control file by using the command
              SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'D:\APP\ORACLE\ORADATA\TESTCAT\CONTROLFILE\CONTROL_STAND1.CTL';
    So the database name in the standby control file is the same as the primary database name (PRIM), and hence the init.ora file of the standby database also contains the DB_NAME = 'PRIM' parameter. I can't change it, because it returns a database-name mismatch error on startup. Should I have a different database name for each, or is the existing one correct?
    Can anybody help me get out of this situation?
    Thanks & Regards
    Tushar Lapani

    Thank you Girish, that solved my redo apply problem. I set the log_archive_dest parameter again and then checked the archived redo log sequence numbers; they are the same for both the primary and standby database. But the table on the standby database is still not being refreshed.
    I did the following scenario:
    1. Inserted 200,000 rows into the emp table of the Scott user on the primary database and committed the changes.
    2. Then I synchronized the standby database using the ALTER DATABASE command, and I also verified that the archive log sequence number is the same for both databases. That means the archived logs from the primary database have been applied to the standby database.
    3. But when I count the number of rows in the emp table of the Scott user on the standby database, it returns only 14 rows, even though the redo logs have been applied.
    So my question is: why are changes made to the primary database not reflected on the standby database, although the redo logs have been applied?
    Thanks
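    A minimal sketch, assuming a physical standby on 11g, of the checks one might run at this point: matching archive-log sequence numbers only show that the logs were received, the APPLIED column and the MRP status show whether they were actually applied, and a standby opened read-only reflects data only up to the last applied redo. None of the commands below are taken from the original configuration:
        -- On the standby: received vs. applied
        SELECT MAX(sequence#) AS last_received FROM v$archived_log;
        SELECT MAX(sequence#) AS last_applied FROM v$archived_log WHERE applied = 'YES';
        SELECT process, status, sequence# FROM v$managed_standby WHERE process LIKE 'MRP%';
        -- To query tables, pause redo apply and open the standby read-only
        ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
        ALTER DATABASE OPEN READ ONLY;
        SELECT COUNT(*) FROM scott.emp;
        -- Resume redo apply afterwards (restart to MOUNT first unless Active Data Guard is licensed)
        ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;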

  • Rules are not getting fired - urgent

    Hello
    I am a newbie to the Oracle Rules engine (10.1.3) and am introducing the rules engine into our architecture.
    I have started firing a simple rule, but it looks like the rules are not getting fired.
    Here is my code snippet: I am setting a value in the fact if another value is null.
    ==========
    RuleDictionary rDict = RuleFactory.getRuleDictionary(RadiusConstants.RADIUS_REPOSITORY, RadiusConstants.RADIUS_DICTIONARY, RadiusConstants.DICTIONARY_VERSION);
    RuleSession ruleSession = new RuleSession();
    ruleSession.executeRuleset(rDict.dataModelRL());
    ruleSession.executeRuleset(rDict.ruleSetRL(taskName));
    RuleMessage ruleMessage = new RuleMessage();
    ruleSession.callFunctionWithArgument("assert", moduleVo);
    ruleSession.callFunction("run");
    ruleSession.callFunctionWithArgument("assert", moduleVo);
    // ruleSession.callFunctionWithArgument("assert", moduleVo);
    System.out.println(" Module Status " + moduleVo.getStatus());
    ========
    Any help is greatly appreciated. I am not sure what I am missing. I am using Rule Author to create the rules.
    I am asserting the fact in the rules engine. Here is the RL file generated from Rule Author.
    ==================
    ruleset MODULE_CREATE
    {
      rule ModuleId_Null
      {
        priority = 0;
        if
        (
          (
            fact gov.sec.radius.admin.service.module.vo.ModuleVO v0_ModuleVO && (
              (v0_ModuleVO.moduleId == null)))
        )
        {
          v0_ModuleVO.status = "P";
          assert(v0_ModuleVO);
        }
      } // end rule ModuleId_Null
    } // end ruleset MODULE_CREATE
    =================
    thanks very much in advance
    Yugandhar

    You need to specify the ruleset or rulesets that should be run. If there is only one, you can do:
    ruleSession.callFunctionWithArgument("run", taskName);
    or
    ruleSession.callFunctionWithArgument("pushRuleset", taskName);
    ruleSession.callFunction("run");
    If there are multiple rulesets you want to run in order, call pushRuleset in the stack order in which you want their actions executed.


Maybe you are looking for

  • Chronic Download and Software Update Problems

    For over a month now, my success rate at downloading Podcasts or using the "Software Update" has been around 5%. Software update attempts often end with the error message: "Connection reset by peer. The installer package has been moved to the Trash

  • Download attachment in Mail application is super slow

    Whenever I download an attachment from an email in the Mail application, it is super slow. It's 10 times slower than when I download the same attachment from Safari, so it's not because of the internet speed. I don't know if you guys have the same problem lik

  • Numeric IP in Recipient Email Address

    Hi all, I am doing some migration work to Messaging Server 2005Q4. I am wondering if it's possible to configure MS to accept recipient email addresses containing an IP address rather than the domain name, such as [email protected]? I tried but got bou

  • Variables, database, expressions in Pages

    *Making Pages a tool for technicians and researchers* Most advanced literature contains text created from specific data structures. They could for example be 1 a database being a reference list. At the referring point you use on text, eg "Andersen 56",

  • Where can I download 10.1.1.016?

    This is not in the downloads section, although the technotes say it is - does anyone have a url?