Best way to config GWIA

I have a SLES 10 SP2 server running GW703HP4. I know I can edit the GWIA properties using C1, but I also see the GWIA configuration file under /opt/novell/groupwise/agents/share/gwia.cfg.
When GWIA starts on the SLES server, how does it know which parameters to use? Does it read the eDirectory object first and then the gwia.cfg file? If there is a conflict between the eDirectory setup and gwia.cfg, which one wins?
What is the best practice for configuring GWIA settings on SLES?
Thanks in advance.
Wilson

So I should comment out the changes I made in gwia.cfg and then use C1 to modify the GWIA properties, right?
I have to restart GWIA after I edit the gwia.cfg file. If I instead make the change through the GWIA properties in C1, GWIA will pick it up automatically and I won't need to restart GWIA manually, right?
Thanks for your help:)
Wilson
Originally Posted by mrosen
Hi,
wilsonhandy wrote:
>
> I have a SLES 10 SP2 server running GW703HP4. I know I can edit the GWIA
> properties using C1 but I also see the GWIA configuration file under
> /opt/novell/groupwise/agents/share/gwia.cfg.
>
> When GWIA starts on the SLES server, how does it know what parameter to
> use?
Gwia.cfg overrides the C1 settings.
> Does it read eDirectory object first then the gwia.cfg file? If
> there is a conflict in the eDirectory setup vs the gwia.cfg, which one
> wins?
Just a side note: There is absolutely no part of GW configuration in
eDirectory. The GW configuration you see in ConsoleOne is stored in the
GroupWise databases, not eDir.
> What is the best practice to configure GWIA settings on SLES?
The OS doesn't matter. The best practice is to leave gwia.cfg empty
(except for the paths). Otherwise, what you see in C1 might not reflect
reality (C1 ignores gwia.cfg entirely), and changes you make in C1 will
never take effect if they're also configured in gwia.cfg.
CU,
Massimo Rosen
Novell Product Support Forum Sysop
No emails please!
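For illustration, a minimal gwia.cfg along the lines Massimo describes would contain only the home switch pointing GWIA at its gateway directory under the domain, leaving every functional setting to ConsoleOne (the path below is just a placeholder, and the exact set of path switches your install shipped with is worth double-checking):

--home /gwsystem/dom1/wpgate/gwia

Anything beyond the paths that you want to change (SMTP settings, access control, logging and so on) then goes into the GWIA object's properties in C1, so that what C1 shows is actually what the agent runs with.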

Similar Messages

  • Best way to change config then change it back after a while.

    I spent some time today trying a couple of approaches to this, but they all seemed a little clunky.
    I'm looking for the best way to change the BGP prefixes which are advertised out to an ISP, based on some check.  I want to set a timeout so the router won't attempt to send the route again for, say, 30 minutes after it is triggered, but then will start advertising it again, and monitor to see if the trigger condition returns.  If the trigger condition returns, then again withdraw the route for 30 minutes, and so on.
    I'm using a prefix-list already to limit outbound route advertisements, so it seems simplest to just make a config change to remove one line in the prefix-list, then a few minutes later put it back.
    I tried just using the "cli command wait", but if I set the wait period too long, the applet seemed to die and never ran the later cli commands to put the prefix-list line back.  There is also an exit-time clause for the event, but I couldn't figure out how to put the line back after the exit-time expired.  Lastly, I tried doing an event with a watchdog timer, but couldn't get that to work either.  Before I spend too much time working on different options, I wanted to see if anyone had any recommendations.
    I've done some TCL scripting on Cisco routers, but that seemed to be overkill for this, and I wanted to keep the config easy to manage for peers who might not be as proficient in TCL scripting.
    This is intended for ASR-1002X routers if it matters.
    Any suggestions would be much appreciated.
    Thanks
    Derek

    Thanks for all your help Joe. 
    Ok, so here is my current script, which seems to be working pretty well (changing to entry-type "value" fixed the variability in detection times).  For testing in the script below, I'm using a 30 second timeout for when the line gets put back, and a 60 second timeout for when monitoring should resume after the event is triggered. The script checks the value of the OID every 5 seconds.
    The only other thing I would like to do with it, which I can't figure out, is how to use an environment variable for the exit-time.  Ideally, I would just add a value, like 10 seconds, to the ATimeout variable.  However, I can't figure out the syntax to just use a var for the exit-time.  Does anyone know the secret (or whether it is even possible)?
    event manager environment ATimeout 30
    event manager environment q "
    no event manager applet DDOS_RESPONSE01
    event manager applet DDOS_RESPONSE01
    event snmp oid 1.3.6.1.4.1.9.9.166.1.17.1.1.21.80.65538 get-type exact entry-op gt entry-val "0" entry-type value exit-time 60 poll-interval 5
    trigger
    action 001 cli command "enable"
    action 002 cli command "config term"
    action 003 cli command "no ip prefix-list PUBLIC_NETWORKS seq 140 permit 10.4.1.0/24 le 32"
    action 004 syslog msg "DDoS Attack Detected. Removing Web Srvr Subnet from PUBLIC_NETWORKS for ($ATimeout) seconds."
    action 005 cli command "event manager applet RESTORE_PREFIX"
    action 006 cli command "event timer countdown time $ATimeout "
    action 007 cli command "action 101 cli command $q enable $q"
    action 008 cli command "action 102 cli command $q config term $q"
    action 009 cli command "action 103 cli command $q no event manager applet RESTORE_PREFIX $q"
    action 010 cli command "action 104 cli command $q ip prefix-list PUBLIC_NETWORKS seq 140 permit 10.4.1.0/24 le 32$q"
    action 011 cli command "action 105 syslog msg $q DDoS Attack Timeout ($ATimeout) reached. Re-adding Web Srvr Subnet to PUBLIC_NETWORKS. $q "
    action 012 cli command "action 106 cli command $q no event manager applet RESTORE_PREFIX $q"
    exit

  • Best way to reload an ASA config?

    Hello,
    I've been thinking of 2 scenarios that could happen and I would like to be ready.  If a config error was made on our ASA (we have 2 in active/standby mode), what is the best way to recover, assuming we have a tftp backup or local flash copy?  I know there is a config replace option on routers/switches that will compare the running config and the tftp/local copy and then replace the changes to get you back online without a reload.
    Also, if we had to replace one of the ASAs because it was faulty, I guess I would tftp the config, but what about the license keys?
    Any thoughts/experience would be most welcome.
    Thanks

    If the mistake is not small enough that you can simply undo the commands with "no ___", then copy the backup file to running-config and write mem to further copy it into the startup-config. A local flash copy will always be faster than tftp, but either is of course erasable as well. I'd start with a local copy if available and then fall back to a remote copy where it's not.
    The license keys (technically activation keys on an ASA) need to be generated for you by the TAC in the event of an RMA. Of course, if the non-failed unit has the necessary licenses (in 8.3+), you don't also need to add them on the replacement unit, as an HA pair shares most licenses (with a few exceptions like Security Plus, which is a prerequisite to even enable failover on a 5505, 5510, or 5512-X).
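    As a rough sketch of the restore sequence described above (filenames and the TFTP address are placeholders, not from this thread):
    ciscoasa# copy disk0:/backup.cfg running-config
    ciscoasa# write memory
    or, from a TFTP backup:
    ciscoasa# copy tftp://192.0.2.10/backup.cfg running-config
    ciscoasa# write memory
    Keep in mind that copying onto running-config merges with what is already there rather than replacing it, so for a badly mangled config a reload from a known-good startup-config can still be the cleaner option.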

  • Which one is the best way to collect config and performance details in Azure?

    Hi ,
    I want to collect both configuration and performance information for the cloud service, virtual machines and web roles. I am going to collect all these details using Java, so please suggest which is the best way:
    1) REST API
    2) Azure SDK for Java
    Regards
    Rathidevi
    rathidevi

    Hi,
    There are four main tasks to use Azure Diagnostics:
    Set up WAD
    Configure data collection
    Instrument your code
    View the data
    The original Azure SDK 1.0 included functionality to collect diagnostics and store them in Azure storage, collectively known as Azure Diagnostics (WAD). This software, built upon the Event Tracing for Windows (ETW) framework, fulfills two design requirements introduced by the Azure scale-out architecture:
    Save diagnostic data that would be lost during a reimaging of the instance.
    Provide a central repository for diagnostics from multiple instances.
    After including Azure Diagnostics in the role (ServiceConfiguration.cscfg and ServiceDefinition.csdef), WAD collects diagnostic data from all the instances of that particular role. The diagnostic data can be used for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing. Transfers to the Azure storage account for persistence can either be scheduled or on-demand.
    To know more about Azure Diagnostics, please refer to the below article ( Section : Designing More Supportable Azure Services > Azure Diagnostics )
    https://msdn.microsoft.com/en-us/library/azure/hh771389.aspx?f=255&MSPPError=-2147217396
    https://msdn.microsoft.com/en-us/library/azure/dn186185.aspx
    https://msdn.microsoft.com/en-us/library/azure/gg433048.aspx
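    For what it's worth, with the classic cloud service model the reply refers to, the WAD module is typically pulled in through the role definition and pointed at a storage account roughly like this (element placement and the storage account name are placeholders, not taken from this thread):
    <!-- ServiceDefinition.csdef, inside the <WebRole> or <WorkerRole> element -->
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
    <!-- ServiceConfiguration.cscfg, inside <ConfigurationSettings> for the role -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=yourstorageaccount;AccountKey=..." />
    Either the REST APIs or the Azure SDK for Java can then read the collected tables and blobs from that storage account; from Java code the SDK is usually the less verbose of the two options.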
    Hope this helps!
    Regards,
    Sowmya

  • What is the best way to share an iTunes library between 5 users on the same computer?

    What is the best way to share an iTunes library between 5 users on the same computer?
    Currently I have a common iTunes library in a shared folder that each user is linked to.  The problem is that when one user adds music to the shared folder, it does not automatically appear for the other users when they open their iTunes.  They have to go to the shared library and select the new music in order to make it visible in their iTunes.
    Would iTunes file sharing fix this problem?
    Thanks

    aapl.up wrote:
    Rick, are you saying that if you try to share a folder outside of home and use SharePoints, then the other Mac won't prompt you to log in? Have you tried that? It is hard to believe that is the case.
    I am sort of saying that. Have you tried it? I know it's hard to believe, but you really need to help us out by trying some steps that you don't believe.
    We can't see what your exact situation is. We sort of know what's worked for us out here. Just trying to help you get this going in less than 24 hours!
    SharePoints manages the Samba configuration file. It's free software that puts a pretty face on an otherwise cumbersome config file.
    Give it a shot.
    Windows will not prompt for passwords; I have tried this with multiple computers at home for both Vista and XP.
    As I stated, you will not get prompted on Windows if you have set up sharing in an insecure fashion. You are not using a secured sharing situation.
    No argument with your statement.

  • What is the best way to implement default values stored in a DB table?

    [JHeadstart 10.1.3 build 78]
    [JDeveloper 10.1.3 SU4]
    We are struggling with how best to implement default values that are stored in a DB table. What we have is a database table with (CODE_TYPE, TABLE_NAME, COLUMN_NAME, DEFAULT_VALUE) as columns. This way the application administrator can administer the default values himself/herself. Now we need to find the best way to set these table-supplied default values in new rows. Globally, we are aware of two ways:
    - override the create() method on the VO (a rough sketch of this option appears after this post)
    - probably create a View Object on top of the database table with default values (we are capable of transposing the table and returning exactly one row with a column for each default value) and use JHeadstart's item property 'Default Value'.
    We prefer the latter, since this is more declarative; however, we struggle with the EL expression needed to indicate the default value.
    If we have a VO named "DefaultValues" with a SELECT on a view on top of our database table (transposed) returning exactly one row, let us say:
    SELECT orglanguage, orgtype, orgstatus [...]
    FROM v_default_values
    --> returning exactly one row
    and we want an EL expression on an item that needs the value from orglanguage. What will the EL expression be? Something like:
    #{data.DefaultValuesPageDef.currentrow.orglanguage.inputValue}? We tried several things but they do not work. A static default value works, but every EL expression so far does not. We know that using "data" can be dangerous, but thought JHeadstart takes care of preparing the other Page Definitions, so it might be possible when you use JHeadstart.
    Or is overriding the create() method the preferred way? Or do we have to look at a Managed Bean for our default values that we refer to from EL (let us say MyAppDefaultValuesBean) and in that case: how do you associate a Managed Bean with a VO?
    Any help would be appreciated. Apart from these default values, things are going rather well in this first J2EE/JHS project for us!
    Toine
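    For reference, the first option (overriding create()) would look roughly like the sketch below. The class, attribute, and helper names are invented for illustration; lookupDefault() is a hypothetical method you would implement against the defaults table, not something JHeadstart provides.
    // Sketch only: default a new row's attribute from the defaults table.
    public class OrganisationImpl extends oracle.jbo.server.EntityImpl {
        @Override
        protected void create(oracle.jbo.AttributeList attributeList) {
            super.create(attributeList);
            // Hypothetical helper that queries the (CODE_TYPE, TABLE_NAME,
            // COLUMN_NAME, DEFAULT_VALUE) table, e.g. via a ViewObject on v_default_values.
            setAttribute("Orglanguage", lookupDefault("ORGANISATIONS", "ORGLANGUAGE"));
        }
        private String lookupDefault(String tableName, String columnName) {
            return null; // placeholder: query the transposed defaults view here
        }
    }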

    Steven,
    Thanks for the reply. Unfortunately, whatever we try, we cannot get it to work. We started looking at the second option (since we also need default values in table-layout new rows). We created a DefaultValues ViewObject, added it to the Application Module, added an EL expression to the Default Display Value property (replacing your ending ")" with a "}" of course ;-)), ran the JAG so that a page definition was generated for DefaultValues, and we managed to get it prepared when loading, for example, the Organisation's jspx page. However, no default value appears in a new row (not in Form, not in Table layout).
    I then created a quick application on top of the HR schema, added a DefaultValues ViewObject using one calculated attribute (set Salary fixed to 1000), added the EL expression to the Salary Default Display Value property in the Employees Group, and made sure the DefaultValuesPageDef is prepared by adding it to the parameter section; I see it getting prepared. I also see a managed bean is created in the Employees-bean.xml.
    In the Embedded OC4J log we see:
    16:01:01 DEBUG (JhsPageLifecycle) -executing onCreate
    16:01:01 DEBUG (JhsPageLifecycle) -CreateEmployeesDefaultValues bean found, applying default values to new row
    2006-08-02 16:01:01.825 WARNING [ADFc] Warning: No Method onCreateEmployees and no actionBinding CreateEmployees found.
    Is it this warning we should be worried about? Since no default value is created.
    The managed bean (Employees-beans.xml) looks like:
    <?xml version="1.0" encoding="windows-1252"?>
    <!DOCTYPE faces-config PUBLIC
    "-//Sun Microsystems, Inc.//DTD JavaServer Faces Config 1.1//EN"
    "http://java.sun.com/dtd/web-facesconfig_1_1.dtd">
    <faces-config xmlns="http://java.sun.com/JSF/Configuration">
    <managed-bean>
    <managed-bean-name>CreateEmployeesDefaultValues</managed-bean-name>
    <managed-bean-class>oracle.jheadstart.controller.jsf.bean.DefaultValuesBean</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
    <managed-property>
    <property-name>iteratorBinding</property-name>
    <value>#{bindings.EmployeesIterator}</value>
    </managed-property>
    <managed-property>
    <property-name>defaultValues</property-name>
    <map-entries>
    <map-entry>
    <key>Salary</key>
    <value>#{data.DefaultValuesPageDef.DefaultValuesIterator.currentRow.Salary}</value>
    </map-entry>
    </map-entries>
    </managed-property>
    <managed-property>
    <property-name>actionResult</property-name>
    <value>CreateEmployees</value>
    </managed-property>
    </managed-bean>
    <managed-bean>
    <managed-bean-name>searchEmployees</managed-bean-name>
    <managed-bean-class>oracle.jheadstart.controller.jsf.bean.JhsSearchBean</managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
    <managed-property>
    <property-name>bindings</property-name>
    <value>#{data.EmployeesPageDef}</value>
    </managed-property>
    <managed-property>
    <property-name>searchBinding</property-name>
    <value>#{data.EmployeesPageDef.advancedSearchEmployees}</value>
    </managed-property>
    <managed-property>
    <property-name>searchAttribute</property-name>
    <value>EmployeeId</value>
    </managed-property>
    <managed-property>
    <property-name>dataCollection</property-name>
    <value>EmployeesView1</value>
    </managed-property>
    <managed-property>
    <property-name>autoquery</property-name>
    <value>true</value>
    </managed-property>
    </managed-bean>
    <managed-bean>
    <managed-bean-name>EmployeesCollectionModel</managed-bean-name>
    <managed-bean-class>oracle.jheadstart.controller.jsf.bean.JhsCollectionModel</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
    <managed-property>
    <property-name>jhsPageLifecycle</property-name>
    <value>#{jhsPageLifecycle}</value>
    </managed-property>
    <managed-property>
    <property-name>bindings</property-name>
    <value>#{bindings}</value>
    </managed-property>
    <managed-property>
    <property-name>rangeBinding</property-name>
    <value>#{bindings.EmployeesTable}</value>
    </managed-property>
    <managed-property>
    <property-name>defaultValues</property-name>
    <value>#{CreateEmployeesDefaultValues.defaultValues}</value>
    </managed-property>
    </managed-bean>
    </faces-config>
    This is the DefaultValues.xml:
    <?xml version='1.0' encoding='windows-1252' ?>
    <!DOCTYPE ViewObject SYSTEM "jbo_03_01.dtd">
    <ViewObject
    Name="DefaultValues"
    BindingStyle="OracleName"
    CustomQuery="true"
    ComponentClass="hr.model.DefaultValuesImpl"
    UseGlueCode="false" >
    <DesignTime>
    <Attr Name="_version" Value="10.1.3.36.73" />
    <Attr Name="_codeGenFlag2" Value="Access|Coll|VarAccess" />
    </DesignTime>
    <ViewAttribute
    Name="Salary"
    IsUpdateable="false"
    IsPersistent="false"
    Precision="255"
    Type="java.lang.String"
    ColumnType="VARCHAR2"
    AliasName="SALARY"
    Expression="1000"
    SQLType="VARCHAR" >
    </ViewAttribute>
    </ViewObject>
    The PageDef for Defaultvalues is like:
    <?xml version="1.0" encoding="UTF-8" ?>
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel"
    version="10.1.3.36.73" id="DefaultValuesPageDef"
    Package="hr.view.pagedefs" EnableTokenValidation="false">
    <parameters/>
    <executables>
    <iterator id="DefaultValuesIterator"
    Binds="AppModuleDataControl.DefaultValues1"
    DataControl="AppModuleDataControl" RangeSize="10"/>
    </executables>
    <bindings>
    <attributeValues id="DefaultValuesSalary"
    IterBinding="DefaultValuesIterator">
    <AttrNames>
    <Item Value="Salary"/>
    </AttrNames>
    </attributeValues>
    <table id="DefaultValuesTable" IterBinding="DefaultValuesIterator">
    <AttrNames>
    <Item Value="Salary"/>
    </AttrNames>
    </table>
    <action id="FirstDefaultValues" IterBinding="DefaultValuesIterator"
    DataControl="AppModuleDataControl" RequiresUpdateModel="true"
    Action="12"/>
    <action id="PreviousDefaultValues" IterBinding="DefaultValuesIterator"
    DataControl="AppModuleDataControl" RequiresUpdateModel="true"
    Action="11"/>
    <action id="NextDefaultValues" IterBinding="DefaultValuesIterator"
    DataControl="AppModuleDataControl" RequiresUpdateModel="true"
    Action="10"/>
    <action id="LastDefaultValues" IterBinding="DefaultValuesIterator"
    DataControl="AppModuleDataControl" RequiresUpdateModel="true"
    Action="13"/>
    <methodAction RequiresUpdateModel="true" Action="999"
    id="advancedSearchDefaultValues"
    IterBinding="DefaultValuesIterator"
    DataControl="AppModuleDataControl"
    InstanceName="AppModuleDataControl.dataProvider"
    MethodName="advancedSearch"
    ReturnName="AppModuleDataControl.methodResults.AppModuleDataControl_dataProvider_advancedSearch_result"
    IsViewObjectMethod="false">
    <NamedData NDName="viewObjectUsage"
    NDValue="#{searchDefaultValues.dataCollection}"
    NDType="java.lang.String"/>
    <NamedData NDName="arguments" NDValue="#{searchDefaultValues.arguments}"
    NDType="java.util.ArrayList"/>
    <NamedData NDName="allConditionsMet"
    NDValue="#{searchDefaultValues.allConditionsMet}"
    NDType="java.lang.Boolean"/>
    </methodAction>
    <action id="setCurrentRowWithKeyDefaultValues"
    IterBinding="DefaultValuesIterator"
    InstanceName="AppModuleDataControl.DefaultValues1"
    DataControl="AppModuleDataControl" RequiresUpdateModel="false"
    Action="96">
    <NamedData NDName="rowKeyStr" NDValue="#{row.rowKeyStr}"
    NDType="java.lang.String"/>
    </action>
    <action id="CreateDefaultValues" IterBinding="DefaultValuesIterator"
    DataControl="AppModuleDataControl" RequiresUpdateModel="true"
    Action="40"/>
    <action id="DeleteDefaultValues" IterBinding="DefaultValuesIterator"
    DataControl="AppModuleDataControl" RequiresUpdateModel="false"
    Action="30"/>
    <action id="Commit" RequiresUpdateModel="true" Action="100"
    DataControl="AppModuleDataControl"/>
    <action id="Rollback" RequiresUpdateModel="false" Action="101"
    DataControl="AppModuleDataControl"/>
    </bindings>
    </pageDefinition>
    We do not understand what is wrong and why the default values do not get created in the new rows (and it is taking us far too much time). Any chance the EL expression is still wrong? It is a shame that any syntax errors in EL expressions are not visible in some logfile. It looks like when EL expressions are wrong, they are ignored instead of raising an error...
    Toine

  • What's the best way to load balance multiple protocols on one vserver?

    Hi,
    We have a CSM blade on a 6513, in bridge mode. I'm just wondering what is the best way to serve HTTP and HTTPS (or any two or more ports) from the same group of servers. As I see it, we have two options:
    1. Don't set a port on the vserver, so it is load balancing "any" or "tcp". This is easy but I want to be sure there isn't a downside to this, other than the obvious security issue.
    2. Create multiple vservers and point them at the same serverfarm. I tried this and I got some odd results with the health checks.
    Any ideas? Thanks a lot.

    You listed the only 2 options available.
    The advantage of solution #2 is that you can apply a specific config for each protocol, i.e. for HTTP you can turn on 'persistent rebalance' if needed.
    If you want to use specific probes [not icmp], it is also good practice to create a different serverfarm for each protocol.
    That way, if the HTTP service goes down but not the server, you can still have the other protocols load balanced.
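    To make option #2 concrete, a rough sketch with per-protocol serverfarms and probes could look like the following (slot number, names and addresses are placeholders, and the exact syntax should be checked against your CSM software release):
    module ContentSwitchingModule 4
     probe WEB_PROBE http
      interval 10
     serverfarm FARM_HTTP
      nat server
      no nat client
      probe WEB_PROBE
      real 10.10.10.11
       inservice
     serverfarm FARM_HTTPS
      nat server
      no nat client
      real 10.10.10.11
       inservice
     vserver VS_HTTP
      virtual 10.10.10.100 tcp 80
      serverfarm FARM_HTTP
      persistent rebalance
      inservice
     vserver VS_HTTPS
      virtual 10.10.10.100 tcp 443
      serverfarm FARM_HTTPS
      inservice
    With separate farms, an HTTP probe failure only takes the real out of the HTTP farm, while port 443 keeps being load balanced, which is the behavior Gilles describes.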
    Regards,
    Gilles.
    Thanks for rating this answer.

  • What is the best way to deploy/update custom security realm classes to WLS 6.0?

    From the WLS 6.0 console, I see that I can specify the Java class that
    implements my custom security realm but I am wondering what is the best way
    to deploy/update this code. I don't see a way to do this from the console.
    Does this mean that I have to manually copy the class files over that
    implement my custom security realm?

    Thanks Danut,
    A jar file seems to be a good way to package it up but it sounds like it
    still needs to be manually copied to each Weblogic server install directory
    post-installation and whenever it is updated. I thought it would be nice to
    be able to deploy/update the custom security realm by uploading it through
    the Console just as you can with web applications and EJBs.
    Brian
    "Danut Prisacaru" <[email protected]> wrote in message
    news:3aba2db0$[email protected]..
    You have to have your Custom Realm class in the classpath. I usually have a
    jar file with all the Custom Realm classes, and I copy that jar into the lib
    folder. Then I modify "startWebLogic.cmd" and I add to the classpath
    ".\lib\CustomRealm.jar"
    set
    CLASSPATH=.;.\lib\weblogic_sp.jar;.\lib\weblogic.jar;.\lib\CustomRealm.jar;
    >
    Be aware that in order to have your custom realm, besides creating the custom
    realm using the console you also have to create a custom caching realm and choose
    that one as your default caching realm.
    Here is how the security settings are looking in my "config.xml"
    <CustomRealm Name="CustomRealm"
    RealmClassName="Custom.appserver.weblogic.security.CustomRealm"/>
    <CachingRealm BasicRealm="CustomRealm" CacheCaseSensitive="true"
    Name="CustomCachingRealm"/>
    <Realm CachingRealm="CustomCachingRealm" FileRealm="wl_default_file_realm"
    Name="wl_default_realm"/>
    <FileRealm Name="wl_default_file_realm"/>
    <Security GuestDisabled="false"
    Name="mydomain" PasswordPolicy="wl_default_password_policy"
    Realm="wl_default_realm"/>
    Danut

  • Best way for Java/C++/C# to share data in a cache?

    I have an order processing application which listens for Order objects to be inserted in a cache. If I want Java, C# and C++ apps to be able to submit orders to that cache, what's the best way to set up that Order object? Orders currently have many enum member variables. What happens when a C# or C++ app needs to put an Order object in the cache? How would it set those Java enums? Also, the Java enum classes have a lot of Java-specific code in them for convenience. I imagine for cross-platform simplicity it might have been best if the Order object were just an array of Strings or a Map of Strings to values, but I have too much code depending on the Order object being how it is currently. Should I extract an Order interface? What about the enums? My Java enums aren't simple {RED,GREEN,BLUE} type enums; they contain references to other enums, etc. - see below...
    A portion of my Order class looks like:
    public class Order implements Cloneable, Comparable, java.io.Serializable {
      private static Logger logger = Logger.getLogger(Order.class);
      private boolean clearedFromOpenOrdersTable = false; 
      private boolean trading_opened = false;
      private static Random generator = new Random();
      private static int nextID = generator.nextInt(1000000); //just for testing
      private int quantity = 0;
      private int open = 0;
      private int executed = 0;
      private int last = 0;
      private int cancelPriority = 0;
      private Integer sendPriority = 0;
    //enums
      private OrderSide side = OrderSide.BUY;
      private OrderType orderType = OrderType.MARKET;
      private OrderTIF tif = OrderTIF.DAY;
      private OrderStatus orderStatus = OrderStatus.PENDING_NEW;
      private OrderExchange orderExchange = null;
      private OOType ooType = OOType.NOTOO;
      private OOLevel ooLevel = OOLevel.NONE;
      private Float limit = new Float(0);
      private Float stop = null;
      private float avgPx = 0.0f;
      private float lastPx = 0.0f;
      private String account = null;
      private String symbol = null;
      private long submitTimestamp;
      private long fillTime = 0;
      private long ackTime = 0;
      private Timestamp submitSqlTimestamp;
      public /*final*/ static NamedCache cache;
      private ArrayList<OrderStatusChangeListener> statusChangeListeners =
        new ArrayList<OrderStatusChangeListener>();
      transient private Format formatter = new SimpleDateFormat("hh:mm:ss a");
      public static void connectToCache() {
        cache = CacheFactory.getCache("orders");
      public void send() {
        this.submitTimestamp = System.currentTimeMillis() ;
        this.submitSqlTimestamp = new Timestamp(submitTimestamp);
        cache.put(this.ID, this);
      public void setCancelCount(int i) {
        cancelCount = i;
      public int getCancelCount() {
        return cancelCount;
      public static class CancelProcessor extends AbstractProcessor {
        public Object process(InvocableMap.Entry entry) {
          Order o = (Order)entry.getValue();
          if (o.cancelCount == 0) {
            o.cancelCount = 1;
          } else {
            logger.info("ignoring dup. cancel req on " + o.symbol + " id=" + o.ID);
          return o.cancelCount;
      public void cancel() {
          // cache.invoke(this.ID, new CancelProcessor() ); // must this be a 'new' one each time?
          Filter f = new EqualsFilter("getCancelCount", 0);
          UpdaterProcessor up = new UpdaterProcessor("setCancelCount",new Integer(1) );
          ConditionalProcessor cp = new ConditionalProcessor(f, up);
          cache.invoke(ID, cp);
      NumberIncrementor ni1 = new NumberIncrementor("CancelCount", 1, false);
      public void cancelAllowingMultipleCancelsOfThisOrder() {
        System.out.println("cancelAllowingMultipleCancelsOfThisOrder symbol=" + symbol + " id=" + ID);
        cache.invoke(this.getID(), ni1);
      public Timestamp getSubmitSqlTimestamp(){
        return submitSqlTimestamp;
      boolean isWorking( ) {
           // might need to write an extractor to get this from the cache atomically
           if (orderStatus != null &&
                (orderStatus == OrderStatus.NEW || orderStatus == OrderStatus.PARTIALLY_FILLED ||
                    // including PENDING_CANCEL breaks order totals from arcadirect
                    // because they send a pending cancel unlike foc
                    // os.getValue() == OrdStatus.PENDING_CANCEL ||
                    orderStatus == OrderStatus.PENDING_NEW ||
                    orderStatus == OrderStatus.PENDING_REPLACE)) {
              return true;
            } else {
              return false;
      public long getSubmitTimestamp(){
        return submitTimestamp;
      private void fireStatusChange( ) {
              for (OrderStatusChangeListener x:statusChangeListeners) {
                    try {
                         x.dispatchOrderStatusChange(this );
                    } catch (java.util.ConcurrentModificationException e) {
                         logger.error("** fireStatusChange: ConcurrentModificationException "+e.getMessage());
                         logger.error(e.getStackTrace());
                         e.printStackTrace();
      public Order() {
          ID = generateID();
          originalID = ID;
      public Object clone() {
          try {
            Order order = (Order)super.clone();
            order.setOriginalID(getID());
            order.setID(order.generateID());
            return order;
          } catch (CloneNotSupportedException e) {
          return null;
        class ReplaceProcessor extends AbstractProcessor {
          // this is executed on the node that owns the data,
          // no network access required
          public Object process(InvocableMap.Entry entry) {
            Order o = (Order)entry.getValue();
            int counter=0;
            float limit=0f;
            float stop=0f;
            int qty=o.quantity;
            boolean limitChanged=false, qtyChanged=false, stopChanged=false;
            for (Replace r:o.replaceList) {
              if (r.pending) {
                counter++;
                if (r.limit!=null) {limit=r.limit; limitChanged=true;}
                if (r.qty!=null) {qty=r.qty; qtyChanged=true;}
                if (r.stop!=null) {stop=r.stop; stopChanged=true;}
            if (limitChanged) o.limit=limit;
            if (qtyChanged) o.quantity=qty;
            if (stopChanged) o.stop=stop;
            if (limitChanged || qtyChanged || stopChanged)
            entry.setValue(o);
            return counter;
      public void applyPendingReplaces() {
         cache.invoke(this.ID, new ReplaceProcessor());
      private List <Replace>replaceList;
      public boolean isReplacePending() {
        if (replaceList==null) return false; 
        for (Replace r:replaceList) {
          if (r.pending==true) return true;
        return false;
      class ReplaceAddProcessor extends AbstractProcessor {
        // this is executed on the node that owns the data,
        // no network access required
        Replace r;
        public ReplaceAddProcessor(Replace r){
          this.r=r;
        public Object process(InvocableMap.Entry entry) {
          Order o = (Order)entry.getValue();
          if (o.replaceList==null) o.replaceList = new LinkedList();
          o.replaceList.add(r);
          return replaceList.size();
      public void replaceOrder(Replace r) {
          //  Order.cache.invoke(this.ID, new ReplaceAddProcessor(r));
          Order o = (Order)Order.cache.get(this.ID);
          if (o.replaceList==null) o.replaceList = new LinkedList();
          o.replaceList.add(r);
          Order.cache.put(this.ID, o);
      public boolean isCancelRequestedOrIsCanceled() {
        // change to cancelrequested lock, not ack lock
        if (canceled) return true;
    //  ValueExtractor extractor = new ReflectionExtractor("getCancelCount");
    //  int cc = (Integer)cache.invoke( this.ID , extractor );
        Order o = (Order)cache.get(this.ID);
        int cc = o.getCancelCount();
        return cc > 0 || o.isCanceled();
        class Replace implements Serializable{
        boolean pending=true;
        public Float limit=null;
        public Float stop=null;
        public Integer qty=null;
        public Replace() {
      }and then a portion of my OrderExchange.java Enum looks like this:
    class SymbolPair implements java.io.Serializable {
      String symbol;
      String suffix;
      SymbolPair(String symbol, String suffix) {
        this.symbol = symbol;
        this.suffix = suffix;
      public boolean equals(Object o) {
        SymbolPair x = (SymbolPair)o;
        return (this.symbol == x.symbol && this.suffix == x.suffix);
      public int hashCode() {
        return (symbol + "." + suffix).hashCode();
      public String toString() {
        if (suffix == null)
          return symbol;
        return symbol + "." + suffix;
    public enum OrderExchange implements java.io.Serializable {
      SIM("S", false, '.', OrderTIF.DAY) {
        public String getStandardizedStockSymbol(String symbol, String suffix) {
          return symbol + "." + suffix;
        public SymbolPair getExchangeSpecificStockSymbol(String symbol) {
          return new SymbolPair(symbol, null);
      TSX("c", false, '.', OrderTIF.DAY) {
        public String getStandardizedStockSymbol(String symbol, String suffix) {
          String x = externalSymbolPairToInternalSymbolMap_GS.get(new SymbolPair(symbol, suffix));
          return x == null ? symbol : x;
        public SymbolPair getExchangeSpecificStockSymbol(String symbol) {
          SymbolPair sa = internalSymbolToExternalSymbolPairMap_GS.get(symbol);
          return sa == null ? new SymbolPair(symbol, null) : sa;
      NYSE("N", false, '.', OrderTIF.DAY) {
        public String getStandardizedStockSymbol(String symbol, String suffix) {
          String x = externalSymbolPairToInternalSymbolMap_GS.get(new SymbolPair(symbol, suffix));
          return x == null ? symbol : x;
        public SymbolPair getExchangeSpecificStockSymbol(String symbol) {
          SymbolPair sa = internalSymbolToExternalSymbolPairMap_GS.get(symbol);
          return sa == null ? new SymbolPair(symbol, null) : sa;
      ARCA("C", false, '.', OrderTIF.GTD) {
        public String getStandardizedStockSymbol(String symbol, String suffix) {
          String x = externalSymbolPairToInternalSymbolMap_GS.get(new SymbolPair(symbol, suffix));
          return x == null ? symbol : x;
        public SymbolPair getExchangeSpecificStockSymbol(String symbol) {
          SymbolPair sa = internalSymbolToExternalSymbolPairMap_GS.get(symbol);
          return sa == null ? new SymbolPair(symbol, null) : sa;
      public abstract String getStandardizedStockSymbol(String symbol, String suffix);
      public abstract SymbolPair getExchangeSpecificStockSymbol(String symbol);
      private static Map<SymbolPair, String> externalSymbolPairToInternalSymbolMap_GS = new HashMap<SymbolPair, String>();
      private static Map<SymbolPair, String> externalSymbolPairToInternalSymbolMap_ARCA = new HashMap<SymbolPair, String>();
      private static Map<String, SymbolPair> internalSymbolToExternalSymbolPairMap_GS = new HashMap<String, SymbolPair>();
      private static Map<String, SymbolPair> internalSymbolToExternalSymbolPairMap_ARCA = new HashMap<String, SymbolPair>();
      private static Object[] toArrayOutputArray = null;
      static {
        String SQL = null;
        try {
          Connection c = MySQL.connectToMySQL("xxx", "xxx", "xxx", "xxx");
          SQL = "SELECT symbol, ARCASYMBOL, INETSYMBOL, ARCASYMBOLSUFFIX, INETSYMBOLSUFFIX from oms.tblsymbolnew";
          Statement stmt = c.createStatement();
          ResultSet rs = stmt.executeQuery(SQL);
          while (rs.next()) {
            String symbol = rs.getString("symbol");
            if (rs.getString("ARCASYMBOL") != null) {
              if (!symbol.equals(rs.getString("ARCASYMBOL")) || rs.getString("ARCASYMBOLSUFFIX") != null) {
                String suffix = rs.getString("ARCASYMBOLSUFFIX");
                SymbolPair sp = new SymbolPair(rs.getString("ARCASYMBOL"), suffix);
                internalSymbolToExternalSymbolPairMap_ARCA.put(symbol, sp);
                externalSymbolPairToInternalSymbolMap_ARCA.put(sp, symbol);
        } catch (Exception e) {
          System.out.println(SQL);
          e.printStackTrace();
          System.exit(0);
      static {
        populateSymbolToDestination();
      static Logger logger = Logger.getLogger(OrderExchange.class);
      private static HashMap<String, OrderExchange> symbolToDestination = new HashMap<String, OrderExchange>();
      private final String tag100;
      private final boolean usesSymbolSuffixTag;
      private final char symbolSuffixSeparator;
      private final OrderTIF defaultTif;
      private static final String soh = new String(new char[] { '\u0001' });
      private OrderExchange(String tag100, boolean usesSymbolSuffixTag, char symbolSuffixSeparator, OrderTIF defaultTif) {
        this.tag100 = tag100;
        this.defaultTif = defaultTif;
        this.usesSymbolSuffixTag = usesSymbolSuffixTag;
        this.symbolSuffixSeparator = symbolSuffixSeparator;
      public OrderTIF getDefaultTif() {
        return defaultTif;
      public String getTag100() {
        return tag100;
      public char getSymbolSuffixSeparator() {
        return symbolSuffixSeparator;
      public static OrderExchange getOrderExchangeByExchangeName(String name) {
        for (OrderExchange d : OrderExchange.values()) {
          if (d.toString().equalsIgnoreCase(name.trim())) {
            return d;
        return null;
      Thanks,
    Andrew

    Hi Andrew
    The only way to serialize objects so that they can be used by languages other than Java is to use the Portable Object Format (POF).
    The implementation of this requires you to implement the PortableObject interface in Java. PortableObject defines two methods:
    public void readExternal(PofReader reader);
    public void writeExternal(PofWriter writer);
    Also, you need to add a POF config file that ties the type to a type id.
    In C++ each type needs two template methods implemented to serialize and deserialize. But first it needs to register the data type with the same type id as for the Java type, using the COH_REGISTER_MANAGED_CLASS macro.
    Second, analogous to Java, implement the serializer stubs:
    template<> void serialize<Type>(PofWriter::Handle hOut, const Type& type);
    template<> Type deserialize<Type>(PofReader::Handle hIn);
    For C#, your serializable types need to implement IPortableObject, with its two methods:
    void IPortableObject.ReadExternal(IPofReader reader);
    void IPortableObject.WriteExternal(IPofWriter writer);
    Similar to Java, C# uses a POF configuration file; the same type id should bind to the corresponding C# type.
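    As a rough Java-side sketch for a much-simplified Order (the field set and POF indexes here are invented for illustration, not taken from Andrew's class):
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;
    import java.io.IOException;
    public class SimpleOrder implements PortableObject {
        private String symbol;
        private int quantity;
        private float limit;
        public SimpleOrder() {}  // POF deserialization needs a public no-arg constructor
        public void readExternal(PofReader in) throws IOException {
            symbol   = in.readString(0);  // indexes must match writeExternal exactly
            quantity = in.readInt(1);
            limit    = in.readFloat(2);
        }
        public void writeExternal(PofWriter out) throws IOException {
            out.writeString(0, symbol);
            out.writeInt(1, quantity);
            out.writeFloat(2, limit);
        }
    }
    The class is then registered in pof-config.xml under a user-type id, and the C++ and C# counterparts are registered under that same id so all three languages agree on the wire format; enums can be written as, for example, their name string and mapped back to the matching constant on each side.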
    For more information see
    POF Configuration: http://coherence.oracle.com/display/COH34UG/POF+User+Type+Configuration+Elements
    C++: http://coherence.oracle.com/display/COH34UG/Integrating+user+data+types
    C#: http://wiki.tangosol.com/display/COHNET33/Configuration+and+Usage#ConfigurationandUsage-PofContext
    Hope that helps!
    /Charlie
    Edited by: Charlie Helin on Apr 30, 2009 12:31 PM

  • Best way to configure a network comprising a WLSE and many APs?

    Hi the Cisco NetPro community,
    I would like to have a discussion with you on the best way to configure a network containing a WLSE and a large number of Access Points.
    The network I want to configure comprises several subnetworks, each comprising about 10 access points (with some advanced settings for security). It could be quite a long and boring process to set the configuration for all of those, so I am looking for the quickest and easiest solution to do so.
    First of all, the configuration of IP addresses has to be done on each Access Point after unpacking it. The configuration of my network comprises 1 WDS active AP, 1 WDS backup AP and the rest infrastructure APs, that for each development site.
    I thought about several solutions:
    - 1st solution could be to apply a configuration file (i.e. load the config.txt file) to each AP manually, changing some values (IP, local radius...).
    But the problem is that passwords can't be changed with a text editor because they are written as hashes.
    - 2nd solution could be to configure each AP (after its IP is set) using its web interface.
    No more problem with hashed passwords, but this method is quite tedious when navigating through the menu pages of the AP web interface...
    - 3rd solution, which could appear to be the best, is to create a template on the WLSE and apply it to all APs.
    No more tedious connections to each AP, but the problems are: we need to create as many templates as APs (or change some parameters each time), and we still need to set some parameters directly on the APs beforehand (SNMP, SSH, WDS configuration...) so that the WLSE can manage the APs.
    So, what do you think could be the best solution to deploy such a network with many APs?
    How is it possible to avoid (as far as we can) configuring the APs one by one?
    Thanks a lot in advance for your consideration and your ideas!
    Alexis.

    Well, for one of my clients that had over 60 sites, we actually created a couple of templates. We created a basic template and a template for each site. You can have the APs obtain the configuration from the WLSE, but you need to configure a DHCP option. My client did MAC address reservations, but of course you need the MAC address first. I guess you can also let the AP get an address and change it later. They tried doing different things: first letting the AP obtain a default config and then pushing out the configuration for that site.
    As for the hash, you can set the password in ASCII... when you do a show run, then of course it will be hashed.
    http://www.cisco.com/en/US/docs/wireless/wlse/2.12/user/guide/deploywz.html#wp1936755

  • Best way to move apps, data, mail, etc. to new imac?

    I am wondering what the best way is to move stuff from my G4 PowerBook to the new iMac. Both are OS X 10.4.4. I have an external FireWire drive with a clone of my PowerBook drive on one partition. I don't want to copy everything to the new Mac. I have a .Mac acct.
    Here are my questions:
    Can I move my mail, contacts, iCal, and Transmit (FTP) settings just by syncing with .Mac? All of those are backed up daily to iDisk. If I sync to the new Mac, will all mail messages be copied?
    What about apps? Is it best to just drag them over from the PowerBook or FireWire drive, or to re-install from disks and/or install/dmg files when possible?
    Firefox... I have a bunch of extensions installed and the config is highly customized. If I just drag it over to the new iMac, will the config and extensions still work?
    Any idea of the easiest way to keep files synced between 2 Macs? All files are in my user folder.
    I use SuperDuper for backup from pb to fw drive, but I am hoping for a more direct/automatic way for the file sync. Size and number of files are too large for .mac. Has anyone tried SuperDuper on new mac yet?
    One more question... user accounts. I have 3. One main (admin), one guest, and one emergency admin. There is nothing to transfer from any acct. besides main. Is it better to transfer the accts over from pb, or to set up identical accts on the new mac?
    Thanks!
    12" pb g4 1.33/768, 20" imac cd 2.0/1G   Mac OS X (10.4.4)  

    Perhaps the easiest way to transfer information from one machine to the other is using Migration Assistant, found in your Applications/Utilities folder. Have a look at . You'll need a FireWire cable to connect the two machines.
    Since you are migrating applications etc. to a new system architecture, I suggest being cautious about which applications you transfer. Some (non-Universal) apps may not be Rosetta compatible. Have a look at the current compatibility list from Apple for Universal apps. Also, here's about Rosetta.
    Rather than syncing the .Mac account to your new system, it may be safer to simply move this information via MA. I'm not sure how the syncing works between the 2 machines going forward. Someone else will have to answer that question.
    "I use SuperDuper for backup from pb to fw drive, but I am hoping for a more direct/automatic way for the file sync. Size and number of files are too large for .mac. Has anyone tried SuperDuper on new mac yet?"
    Not ready for prime time yet. Have a look at this thread on SuperDuper's forums.
    User accounts can be set up on your new machine via System Preferences > Accounts. No sense transferring them if you have no personal information in the User account.
    Firefox is not yet Universal, although I understand it's coming soon. I guess the current version runs OK under Rosetta. The transfer of extensions etc. ought to be handled easily via Migration Assistant.
    Lastly, make sure your G4 is fully backed up via SuperDuper (bootable clone) before starting Migration Assistant.
    Good Luck
    iMac G5 Rev C 20" 2.5gb RAM 250 gb HD/iBook G4 1.33 ghz 1.5gb RAM 40 gb HD   Mac OS X (10.4.4)   LaCie 160gb d2 HD

  • Best way to manage all your WSAs?

    Hi Forum,
    I run a setup with a central M660 for policy management of 10 WSAs.
    But when it comes to configuration management, like each box's own running config, PAC files, AD integration, etc.,
    what is the best way to do this, and are there any central tools available to accomplish this?
    I have SolarWinds running that might be used to some extent?
    As a side effect of managing the WSAs, I guess I will also be needing RBAC, and I see there is support for RADIUS.
    (External Authentication; Admin, Oper, RO-Oper & Guest)
    As I run ACS 5.1 as well, how can this be used for RBAC on WSAs?
    And again, can I push this config centrally via the M660?

    We use the M380 for Policy Management but manage the configuration items you mentioned individually. I am not aware of a system that can centralize those configuration functions. I would ask your local sales rep.

  • Best way to configure patching of multiple groups?

    Howdy,
    We're looking to start using Config Manager 2012 R2 to handle patching of our servers.  I'm looking for any advice or opinions as to how to set everything up to run in the best way.
    We have 3 primary groups that we want to update
    QA/Dev Pilot group (~20 machines)
    QA/Dev All group (~100 machines including the 20 above)
    Production (~90 machines)
    We want to do patching on the weekend and do each group on consecutive weekends.
    Pilot - 3rd Saturday
    QA/Dev - 4th Saturday
    Prod - 1st Saturday
    I'm just looking for the best way to set things up in SCCM to handle this.
    We were going to have an Automatic Deployment Rule for each group and schedule them that way.
    Then I saw something that said to just have 1 ADR that points to a collection that contains all those group collections and just set maintenance windows on each group collection to handle the scheduling.
    I'm just trying to figure out if there's a "best" way to do this, or if there are multiple ways that all lead to the same end result and it really doesn't matter which one we choose.
    We're all still pretty new to SCCM so any advice would be much appreciated.
    Thanks.

    I don't like the idea of using ADRs for handling all the patching, as it gets out of your control. That being said, to achieve something like that you need to run the ADRs at the same time every month, otherwise the deployments will contain different updates.
    So you have to play with the deadlines to get your results with the different Saturdays.
    Also, to make sure patches don't get installed during the week, by accident, you can use maintenance windows. For more information see:
    http://technet.microsoft.com/en-us/library/hh508762.aspx
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude
    We only update the patch list the day after Patch Tuesday, so all the ADRs should have the same patches.  But that brings up the question I asked about having multiple ADRs scheduled for consecutive Saturdays, or having 1 ADR with each collection given a maintenance window to set the Saturday it goes.  I guess having multiple ADRs that run at the same time but with different deadlines would be similar to just having 1 and using the maintenance windows.  But it might be better, so that if the deadline hits but it doesn't patch for some reason, a maintenance window would also keep it from going during the day...
    So maybe multiple ADRs that run on the 3rd Saturday, each one with a different deadline on a different Saturday.  And then each collection would also have a maintenance window in case it didn't install at the deadline for some reason?
    hmm...

  • Best way to copy ALL DATA from song to song?

    Hello everyone,
    I'm in the middle of scoring a film and have run into what seems to be a corrupted song, and want to attempt to transfer all the data from this song into a new file.
    I'm running multiple global tracks with changes in tempos and time sigs all over, multiple audio tracks, and various virtual instruments. I also have an extensive environment in which I use instrument tracks to access my EXS instruments, as well as multi-instrument tracks for Kontakt 2. Of course I have this saved as a template, but as you know, as projects progress the current song rarely looks like your autoload. So simply copying and pasting will not work, as I have many new instruments and audio files in different places in my arrange page. Thus when I try to copy and paste, Logic sees that the current tracks are not "correct" and asks to "create new instruments", which seems to randomly throw all the instruments to the bottom of my arrange page, making it all out of whack!
    With some help from the audio config window and some manual editing of the arrange page, I was able to get my song pretty much back to normal, but for future reference, what is the best way to transfer all of my data to another song, while still keeping the look and functionality of the song intact?
    Thanks in advance for your help! If it helps to see what I'm talking about, I'm willing to post my song file for reference, just ask.

    I'd LOVE to somehow avoid #4 if possible. As I stated
    above, my projects always change from the templates
    due to extra audio tracks, audio instruments, etc, so
    a simple "copy and paste" does not seem to work.
    Is there something I'm missing?
    Yup, I was missing something really simple. If you choose NOT to copy instruments, it will ask you to keep the instrument or create a new track. If you keep the instrument (since my instrument names remained the same as the template), Logic will connect the tracks with the instruments and only put the additional tracks (i.e. extra audio tracks etc.) at the bottom. WAY COOL!
    Now my process is a little easier; however, if you have any other tips to share please feel free to post.

  • The best way to change BO layer (in B2B Java class)

    Hi, mate:
    One question: what is the best way to modify a business object?
    For example: in B2B Java, there is a class
    BidMgrImpl.java; I want to modify some code in the method Bid(), but I do not need to modify the action flow or the JSPs.
    I am looking for the simplest way to implement this, instead of modifying everything from the JSP to config.xml, the action, and BidMgrImpl.java.
    Cheers,
    Eric

    Hi, Prashil:
    Thanks for your reply.
    And a basic question: I am using NWDS now, and have no experience with the tool.
    I followed one PDF file (How_To_Create_Custom_App_ECO5200.pdf) to create my application (b2b_xyz); now I have three applications:
    - LPJ_CRM_Db2b_xyzsap.com
    - LPJ_CRM_Dcrmhomeshrext~sap.com
    - LPJ_CRM_Dcrmisawebb2b~sap.com (this contains all JSP files under the webcontent folder).
    My question is: if I need to extend one class, where and how can I create it?
    for example in SAP application:
    com.sap.isa.auction.businessobject.b2x.BidMgrImpl
    If I need to extend this class, what is the classpath going to be, and where can I create this subclass?
    Cheers,
    Eric
