Req: "Logged on to Primary Workstation"

Hi, I have a directive bundle assigned to some devices that runs every 2 hours. I'd like the bundle to run only if a user is logged in (any user). ZCM 10.3.3.44626
There doesn't seem to be a requirement filter that matches my needs. The closest seems to be "Logged on to Primary Workstation", but the description is a bit cryptic:
"Determines whether the user is logged on to his or her primary workstation. The two conditions you can use to set the requirement are Yes and No. If you select Yes, the user must be logged on to his or her primary workstation to meet the requirement. If you select No, if no user is logged on to the workstation, the requirement is not met. However, if a user other than the primary user is logged on to the workstation, the requirement is met."
The key part being that last sentence - it seems to suggest that if ANY user is logged on, the requirement is met?
Does anyone know exactly how this filter works?

Actually, looking at it more carefully, it seems to suggest:
If I select NO as the filter option, then the requirement is met if any user EXCEPT the primary user is logged in.
This seems a little strange, but I suppose I can OR this requirement with the "primary user logged in" one to get what I want.

Similar Messages

  • Reset Primary User and Primary Workstation

    I have a lot of PCs and users with an incorrect primary user and primary workstation. On each workstation with the wrong user, I can see the wrong user in the properties of the ZCM Agent, and it is never updated.
    I already modified the settings under Configuration - Device Management - Primary User ... and Primary Workstation. Currently I have a calculation of Usage (greatest login time), reset after 2 days, and it is NOT blocked due to settings on a ZENworks folder.
    Where can I reset this user/workstation? In the registry, or in a ZENworks configuration file on each machine? I even have PCs which locally show "Nicht verfügbar" (not available) but still show old users in ZCC.
    We are running ZCM 10.3.4 with only Win7 64-bit workstations and Server W2k8r2 with an MS SQL DB.
    Klaus

    BachmannK,
    Have you used Action > Reset Primary User Calculation?
    Resets the primary user calculation schedule for the selected devices.
    The next time a device refreshes its information, it reapplies the
    primary user calculation method to designate its new primary user.
    Shaun Pond

  • Use of standby redo log files in primary database

    Hi All,
    What is the exact use of setting up standby redo log files in the primary database in a Data Guard setup?
    Any good documents?

    A standby redo log is required for the maximum protection and maximum availability modes and the LGWR ASYNC transport mode is recommended for all databases. Data Guard can recover and apply more redo data from a standby redo log than from archived redo log files alone.
    You should plan the standby redo log configuration and create all required log groups and group members when you create the standby database. For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed.
    Refer to the link below and perform the following steps to configure the standby redo log:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i1225703
    If the real-time apply feature is enabled, log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. This results in faster switchover and failover times because the standby redo log files have been applied already to the standby database by the time the failover or switchover begins.
    Refer to the link:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_apply.htm#i1023371
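    The planning advice above boils down to adding appropriately sized standby redo log groups on the standby (and, ideally, on the primary for role reversal). A minimal sketch, with placeholder paths, group numbers and a 50 MB size you would adapt to match your online redo logs:

    ```sql
    -- One more standby group than online groups; each must be the same size
    -- as the primary's online redo logs (assumed 50 MB here).
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 ('/u01/app/oradata/srl10.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 11 ('/u01/app/oradata/srl11.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 12 ('/u01/app/oradata/srl12.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 13 ('/u01/app/oradata/srl13.log') SIZE 50M;
    ```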

  • Data Guard Archive Log Latency Between Primary and Physical Standby

    How can I get the time it takes (latency) for the primary instance to get an archive log over to the physical standby instance and get it "archived" and "applied"? I have been looking at the V$ARCHIVED_LOG view on each side, but the column COMPLETION_TIME always shows a date "MM/DD/YY" and no timestamp. I thought the DATE datatype included date and time. Any ideas on how I can get the latency info I'm looking for?
    Thanks
    Steve

    The DATE datatype does store the time; it's just the default display format that drops it. Did you try using TO_CHAR? e.g.
    to_char(completion_time,'dd/mm/yyyy hh24:mi:ss')
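    Building on that, one hedged approach to the latency question is to compare the timestamps per log sequence on each side (the exact columns to compare are a judgment call):

    ```sql
    -- Run on the standby: when each log finished archiving there and whether
    -- it has been applied; TO_CHAR exposes the time component of the DATEs.
    SELECT sequence#,
           TO_CHAR(first_time,      'dd/mm/yyyy hh24:mi:ss') AS first_time,
           TO_CHAR(completion_time, 'dd/mm/yyyy hh24:mi:ss') AS completion_time,
           applied
    FROM   v$archived_log
    ORDER  BY sequence#;
    ```

    Alternatively, ALTER SESSION SET NLS_DATE_FORMAT = 'DD/MM/YYYY HH24:MI:SS' makes every DATE column display its time component without needing TO_CHAR.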

  • Different redo log names on primary db and standby db

    We are using Oracle 10.2.0.2 to run SAP, and we plan to use Data Guard. The OS is SUSE Linux 10.3.
    The archived redo log name's prefix is '1_', like '1_167109_678247864.dbf', on the primary db.
    If I recover the standby database manually with 'ALTER DATABASE RECOVER AUTOMATIC STANDBY DATABASE' to fix a redo log gap, the required archived redo log name's prefix is 'PRDarch1_', like 'PRDarch1_167260_678247864.dbf'.
    I checked the Oracle parameter LOG_ARCHIVE_FORMAT on both sides; it was set to '%t_%s_%r.dbf'.
    My question is: does the different name have any effect on Data Guard? Or can we just ignore it because it doesn't matter?
    Thanks in advance.

    I'm not interested in a debate so much as finding the answer. Thanks for the info. And welcome to the forum by the way.
    That said, I would think B14239-05 would answer your question.
    In section 5.7.1 it shows:
    LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
    You may be correct on the default, but the only answer I can give you is that I have never set up Data Guard that way. I hope this saves you some time.
    Where did you find this? (log_archive_dest_1 = "LOCATION=/oracle/PRD/oraarch/PRDarch MANDATORY REOPEN",)
    I would consider avoiding the use of MANDATORY. This attribute will cause you a world of trouble if your Standby is unreachable.
    Also if you are using a Flash Recovery Area you do not need to setup a local archiving destination.
    If you are not using Flash Recovery with Data Guard you should. Try testing a failover without it.
    Best Regards
    mseberg

  • Adding a standby redo log on the primary side

    Hi All,
    I have set STANDBY_FILE_MANAGEMENT=AUTO on the standby side, and LOG_FILE_NAME_CONVERT points to an existing location at the OS level.
    When I added a datafile to an existing tablespace on the primary side and performed a log switch, the added datafile got reflected on the standby side.
    But when I try to add a standby logfile (new group), it is not getting added.
    Please help me with this.
    Will I need to perform the steps below?
    1. Add a standby redo log on the primary side.
    2. Cancel the managed recovery process on the standby side.
    3. Add the same standby redo log (same name and size) on the standby side.
    4. Put the standby back in recovery mode.
    Thanks

    Thanks for the reply.
    I have gone through it, but I do not find my answer.
    My question is: why is the STANDBY_FILE_MANAGEMENT=AUTO functionality not adding a standby redo log on the STANDBY SIDE automatically when I add a standby redo log on the PRIMARY SIDE?
    Thanks.
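    The four steps from the question can be sketched as SQL; group numbers, paths and sizes are placeholders:

    ```sql
    -- 1. On the primary: add the standby redo log group.
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 20 ('/u01/srl20.log') SIZE 100M;

    -- 2. On the standby: stop managed recovery.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    -- 3. On the standby: add a matching group (same size).
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 20 ('/u01/srl20.log') SIZE 100M;

    -- 4. On the standby: restart managed recovery.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    ```

    As to the follow-up question above: STANDBY_FILE_MANAGEMENT=AUTO covers datafiles only; online and standby redo logs must be added manually on each side.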

  • Adding redo logs for dataguard primary server

    Dear all:
    I have physical Data Guard servers.
    I want to add new, larger redo logs on the primary server.
    Please advise what action will be needed on the standby server.
    Thanks

    Most of this information is kept in the control file. You have to recreate your standby control file after you add the new online redo logs. Transfer it to the standby server, along with the new online redo logs, while your standby is down. Your standby system should be able to recognize it when it goes back into standby mode.
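    A hedged sketch of that procedure (file names are placeholders):

    ```sql
    -- On the primary, after adding the new online redo logs:
    ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';
    -- Copy /tmp/standby.ctl over the standby's control file locations
    -- (with the standby down), then on the standby:
    STARTUP NOMOUNT;
    ALTER DATABASE MOUNT STANDBY DATABASE;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    ```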

  • Cannot log in to Primary BT email account

    Yesterday I changed my BT email password. Afterwards I logged in successfully. Now I cannot log in with my new password; there's a message on the log-in page saying my username or password is not valid and to try again. I did that, but still no joy. (I even tried my old password, but it was no use.)
    Can somebody help me with this please?

    You could try either calling BT here
    https://bt.custhelp.com/app/contact#h=eyJzdGVwMCI6ImNvbnRhY3RJdGVtTGluazU2MjciLCJzdGVwSWR4IjoxLCJzdG...
    and asking them to reset your password, or try the "forgot password" here
    https://www.bt.com/managepassword/merged_consumer_journey/forgottenpasswordnavPC.do?decorator=merged...

  • How to find the logs for a deleted transport request

    Hello gurus,
    In my DEV box somebody deleted a transport request.
    I need to know who deleted that request. The thing is, the request was not released and not imported into any system; it was deleted right after creation.
    Now I am confused about how to check the logs. I have searched the log file in the trans directory and also SLOG and ALOG.
    I guess there is no use checking the import history in STMS, because that request never existed in any other system?
    So how can I find who deleted it, and when?
    Could you please let me know?
    Help would be highly appreciated.
    Regards,
    srini.

    Hi!
    Are you sure that the transport request was deleted?
    Perhaps the transport request is a local transport; when you release a local transport, it does not create the export files.
    FF

  • Enhancement Req: Logging of actions within SQL Developer

    I've always felt that was a deficiency of OEM - there was no way that I knew of to log what you did, especially DDL commands.
    I wish that SQL Developer would add that capability, for several reasons, not the least of which is being able to extract ad-hoc commands to put in a script if you ever had to rebuild your schema.
    Another reason is to record your actions to validate what you had done.

    If you edit the sqldeveloper.conf file and uncomment this line:
    #AddVMOption -Daudit.trace=db_api
    you should see just about every SQL statement we issue, either in a message log or on the console.
    -kris

  • Logging on to Win 8.1 workstation with domain credentials

    Hi All.
    I've been on Windows 8 Pro (now 8.1 Update 1) for over a year now. Until now, I've always logged on to my workstation with my MS account. I recently decided to join my workstation to a domain where the primary DC is running Server 2008 R2. I joined the domain without a hitch, but when I try to log on to the workstation using domain credentials, the logon screen seems to insist on an MS account. It wants the user name to be in email form only. When I tried to use my domain credentials in that format ([email protected]), it told me that the password was wrong and that I should make sure to use my MS account password.
    I tried disconnecting my MS account from my local account, but it didn't help.
    Any ideas?

    I'm not sure if what you are doing is supported: having a local MS sign-in account and a corporate domain account residing side by side. You might have to give up your MS sign-in and use a local ID for the domain logon to work.
    You may, however, consider setting this up using the Workplace Join feature in 8.1, which should work much better:
    http://blogs.technet.com/b/keithmayer/archive/2013/11/08/why-r2-step-by-step-solve-byod-challenges-with-workplace-join.aspx

  • Restored standby database from primary; now no logs are shipped

    Hi
    We recently had a major network/SAN issue and had to restore our standby database from a backup of the primary. To do this, we restored the database to the standby, created a standby controlfile on the primary, copied it across to the control file locations, started the standby in recovery mode, and applied/registered the logs manually to get it back up to speed.
    However, no new logs are being shipped across from the primary.
    Have we missed a step somewhere?
    One thing we've noticed is that there is no RFS process running on the standby:
    SQL> SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
    PROCESS CLIENT_P SEQUENCE# STATUS
    ARCH ARCH 0 CONNECTED
    ARCH ARCH 0 CONNECTED
    MRP0 N/A 100057 WAIT_FOR_LOG
    How do we start this? Or will it only show if the arc1 process on the primary is sending files?
    The arc1 process is showing at OS level on the primary, but I'm wondering if it's faulty somehow?
    There are NO errors in the alert logs in the primary or the standby. There's not even the normal FAL gap sequence type error - in the standby it's just saying 'waiting for log' and a number from ages ago. It's like the primary isn't even talking to the standby. The listener is up and running ok though...
    What else can we check/do?
    If we manually copy across files and do an 'alter database register' then they are applied to the standby without issue; there's just no automatic log shipping going on...
    Thanks
    Ross

    Hi all
    Many thanks for all the responses.
    The database is 10.2.0.2.0, on AIX 6.
    I believe the password files are ok; we've had issues previously and this is always flagged in the alert log on the primary - not the case here.
    Not set to DEFER on primary; log_archive_dest_2 is set to service="STBY_PHP" optional delay=720 reopen=30 and log_archive_dest_state_2 is set to ENABLE.
    I ran those troubleshooting scripts, info from standby:
    SQL> @troubleshoot
    NAME DISPLAY_VALUE
    db_file_name_convert
    db_name PHP
    db_unique_name PHP
    dg_broker_config_file1 /oracle/PHP/102_64/dbs/dr1PHP.dat
    dg_broker_config_file2 /oracle/PHP/102_64/dbs/dr2PHP.dat
    dg_broker_start FALSE
    fal_client STBY_PHP
    fal_server PHP
    local_listener
    log_archive_config
    log_archive_dest_2 service=STBY_PHP optional delay=30 reopen=30
    log_archive_dest_state_2 DEFER
    log_archive_max_processes 2
    log_file_name_convert
    remote_login_passwordfile EXCLUSIVE
    standby_archive_dest /oracle/PHP/oraarch/PHParch
    standby_file_management AUTO
    NAME  DB_UNIQUE_NAME  PROTECTION_MODE      DATABASE_ROLE     OPEN_MODE
    PHP   PHP             MAXIMUM PERFORMANCE  PHYSICAL STANDBY  MOUNTED
    THREAD# MAX(SEQUENCE#)
    1 100149
    PROCESS STATUS THREAD# SEQUENCE#
    ARCH CONNECTED 0 0
    ARCH CONNECTED 0 0
    MRP0 WAIT_FOR_LOG 1 100150
    NAME VALUE UNIT TIME_COMPUTED
    apply finish time day(2) to second(1) interval
    apply lag day(2) to second(0) interval
    estimated startup time 8 second
    standby has been open N
    transport lag day(2) to second(0) interval
    NAME Size MB Used MB
    0 0
    On the primary, the script has frozen!! How long should it take? It got as far as this:
    SQL> @troubleshoot
    NAME DISPLAY_VALUE
    db_file_name_convert
    db_name PHP
    db_unique_name PHP
    dg_broker_config_file1 /oracle/PHP/102_64/dbs/dr1PHP.dat
    dg_broker_config_file2 /oracle/PHP/102_64/dbs/dr2PHP.dat
    dg_broker_start FALSE
    fal_client STBY_R1P
    fal_server R1P
    local_listener
    log_archive_config
    log_archive_dest_2 service="STBY_PHP" optional delay=720 reopen=30
    log_archive_dest_state_2 ENABLE
    log_archive_max_processes 2
    log_file_name_convert
    remote_login_passwordfile EXCLUSIVE
    standby_archive_dest /oracle/PHP/oraarch/PHParch
    standby_file_management AUTO
    NAME  DB_UNIQUE_NAME  PROTECTION_MODE      DATABASE_ROLE  OPEN_MODE   SWITCHOVER_STATUS
    PHP   PHP             MAXIMUM PERFORMANCE  PRIMARY        READ WRITE  SESSIONS ACTIVE
    THREAD# MAX(SEQUENCE#)
    1 100206
    NOW - before you say it - :) - yes, I'm aware that fal_client = STBY_R1P and fal_server = R1P are incorrect - they should reference PHP - but it looks like it's always been this way! Well, at least for the last 4 years, where it's worked fine; I found an old SP file and it still has R1P set in there...?!?
    Any ideas?
    Ross
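    Based on the posted output, two hedged things to try (the service names below are taken from that output and are assumptions about this configuration): correct the stale R1P FAL parameters on the primary, and bounce the primary's archive destination to force a reconnect to the standby, since a destination that has errored can stay silent without logging anything.

    ```sql
    -- On the primary: fix the fal_* values that still reference R1P.
    -- (FAL is only consulted in the standby role, so this mainly matters
    -- for a future switchover.)
    ALTER SYSTEM SET fal_server = 'STBY_PHP' SCOPE=BOTH;
    ALTER SYSTEM SET fal_client = 'PHP'      SCOPE=BOTH;

    -- Bounce the archive destination and force a log switch to test shipping.
    ALTER SYSTEM SET log_archive_dest_state_2 = DEFER  SCOPE=MEMORY;
    ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=MEMORY;
    ALTER SYSTEM SWITCH LOGFILE;
    ```

    After the switch, RFS should appear in V$MANAGED_STANDBY on the standby if the connection is working.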

  • Why do we need standby redo log on Primary database.

    Hi Gurus,
    I was going through the document in OBE,
    http://www.oracle.com/technology/obe/11gr1_db/ha/dataguard/physstby/physstdby.htm
    I have two queries:
    1) I noticed the statement -
    "Configure the primary database to receive redo data, by adding the standby logfiles to the primary. "
    Why do we have to create standby redo log on a primary database?
    2) There is another statement --
    "It is highly recommended that you have one more standby redo log group than you have online redo log groups as the primary database. The files must be the same size or larger than the primary database’s online redo logs. "
    Why do we need one additional standby redo log group than in Primary database.
    Could anyone please explain to me in simple words.
    Thanks
    Cherrish Vaidiyan

    Hi,
    1. Standby redo logs are used only when the database role is standby. It is recommended to add them on the primary as well, so that they can be used after a role reversal; during normal operation, standby redo logs are not used at all on the primary.
    2. With 3 online redo log groups, it is recommended to use 4 standby redo log groups. This covers the case where log switching happens frequently on the primary and all 3 standby redo logs are not yet completely archived on the standby (due to network delay or a slow ARCH process on the standby), so the 4th can be used.
    How many standby redo log groups are actually in use depends on the redo generation rate: when the rate is low, you may see only 2 of the 4 standby redo log groups being used.
    So it is recommended to have one more standby redo log group for when the redo generation rate is high and all of the existing standby redo log groups are in use.
    Regards
    Anudeep
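    A quick way to sanity-check the "one extra group" rule described above, assuming a single-thread database:

    ```sql
    -- Compare online vs. standby redo log group counts; the second value
    -- should normally be the first plus one (per thread).
    SELECT (SELECT COUNT(*) FROM v$log)         AS online_groups,
           (SELECT COUNT(*) FROM v$standby_log) AS standby_groups
    FROM   dual;
    ```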

  • When changing redo-logs on Primary, do I need to change my standby also?

    Hi, I need to resize my redo logs on a primary db which has a physical standby attached. Do I need to modify this standby, or will the change propagate? It seems unlikely to me.
    thanx

    The only clue that I have found is at
    http://download-east.oracle.com/docs/cd/B12037_01/server.101/b10726/configbp.htm
    Section "Use Multiplexed Standby Redo Logs and Configure Size Appropriately"
    >>
    The remote file server (RFS) process for the standby database writes only to an SRL whose size is identical to the size of an online redo log for the production database. If it cannot find an appropriately sized SRL, then RFS creates an archived redo log file directly instead and logs the following message in the alert log:
    No standby redo log files of size <#> blocks available.
    >>
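    Given that alert-log message, whether RFS will find a usable SRL can be checked up front by comparing sizes, e.g. with a query like:

    ```sql
    -- Standby groups must match the primary's online log size exactly
    -- (per the alert-log message quoted above).
    SELECT 'ONLINE'  AS type, group#, bytes/1024/1024 AS mb FROM v$log
    UNION ALL
    SELECT 'STANDBY' AS type, group#, bytes/1024/1024 AS mb FROM v$standby_log
    ORDER  BY type, group#;
    ```

    So after resizing the online logs on the primary, the standby redo logs on the standby generally need to be dropped and recreated at the new size as well.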

  • How do I use Primary Key and RowID in Materialized View Logs and MVs

    How do I use Primary Key and RowID in materialized view logs and materialized views?
    I don't understand the difference between Primary Key and RowID in the materialized view logs. Besides, I could choose both Primary Key and RowID.
    When do I have to use RowID, and why? When Primary Key? And when both Primary Key and RowID?
    Thank you very much!

    Yes, I have already read it...
    But, for example, I don't understand:
    This is Example 8-1:
    CREATE MATERIALIZED VIEW LOG ON products
    WITH SEQUENCE, ROWID
    (prod_id, prod_name, prod_desc, prod_subcategory, prod_subcat_desc, prod_
    category, prod_cat_desc, prod_weight_class, prod_unit_of_measure, prod_pack_
    size, supplier_id, prod_status, prod_list_price, prod_min_price)
    INCLUDING NEW VALUES;
    But if I create a materialized view log with TOAD and I choose a KEY field, I receive the error:
    ORA-12026: invalid filter column detected
    Then I have to take out the key (in the above example, prod_id).
    And then the script is
    CREATE MATERIALIZED VIEW LOG ON products
    WITH ROWID, SEQUENCE, PRIMARY KEY!!!!!!!!!!!!!!!!!!!!
    (prod_id, prod_name, prod_desc, prod_subcategory, prod_subcat_desc, prod_
    category, prod_cat_desc, prod_weight_class, prod_unit_of_measure, prod_pack_
    size, supplier_id, prod_status, prod_list_price, prod_min_price)
    INCLUDING NEW VALUES;
    I have PRIMARY KEY in the definition (I didn't choose it) and I don't have the prod_id field.
    Why is that?
    Note: If I write the script to create the MV log manually, the PRIMARY KEY option is NOT in the script, and the prod_id field isn't in the script either.
    And on the other hand,
    What is this:
    CREATE MATERIALIZED VIEW LOG ON sales
    WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON times
    WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON customers
    WITH ROWID;
    Do these materialized view logs contain any fields?
    Or do they contain the primary key fields of these tables (sales, times and customers)? Then why is it ROWID instead of PRIMARY KEY?
    Thanks!
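    A minimal sketch of the three variants, on a hypothetical table t with primary key id:

    ```sql
    -- PRIMARY KEY log: changed rows are identified by their PK values.
    -- This is the default when neither clause is specified.
    CREATE MATERIALIZED VIEW LOG ON t WITH PRIMARY KEY;

    -- ROWID log: changed rows are identified by physical rowid; needed for
    -- tables without a usable PK and for many aggregate/join MVs.
    CREATE MATERIALIZED VIEW LOG ON t WITH ROWID;

    -- Both: one log can serve PK-based and rowid-based fast refresh at once.
    CREATE MATERIALIZED VIEW LOG ON t WITH PRIMARY KEY, ROWID;
    ```

    That default also explains the TOAD behaviour above: WITH PRIMARY KEY is implied, the PK columns are logged automatically, and listing a PK column again as an explicit filter column is what raises ORA-12026.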
