Z index handling

Hi to all!
The purpose of my application: monitoring of the motion control system.
Monitored equipment:
1.       Motion controller + motor.
        http://www.yetmotion.com/YetIsrael/Products.asp?Currentcategory_id=44&Currenttat_category_id=70
        http://www.yetmotion.com/YetIsrael/ProductDetails.asp?ID=15
 Monitoring equipment:
1.       NI PXI-6602
2.       Quadrature encoder with Z – index. (The encoder is connected to the shaft of the motor).
Right now the encoder has not been selected. Instead of the encoder I'm using the PG-out of my motion controller, which works exactly like a usual quadrature encoder.
The application is written in LabVIEW 8.2.1, and right now it is nothing more than "Meas Angular Position-Buffered-Cont-Ext Clk.vi" from the LabVIEW examples.
I need an application that does not perform a counter reset on every Z index, only on demand.
My motion controller implements commands such as SLIDE(velocity), Home_C and so on.
Command sequence (for my motion controller):
Slide 1000 – my motor revolves at 1000 rpm (the duration of the movement is not configured).
After 10 min the position is 312445878 ticks (the number is just an example).
With my encoder, a Z index comes every 16384 ticks.
Home_C 100 – perform homing on the C pulse (go to the Z index and reset the actual position of my controller).
In this example the Z index has occurred 312445878/16384 = 19070 times, but only after I send the Home_C command will the next Z index reset the position value.
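As a sanity check, the arithmetic above and the desired "reset only after Home_C" behaviour can be sketched in a few lines of plain Python (the class and names here are invented for illustration; this is not the DAQmx API):

```python
TICKS_PER_REV = 16384  # the Z index fires once per encoder revolution

position = 312445878
print(position // TICKS_PER_REV)  # 19070 Z-index pulses seen so far

class GatedCounter:
    """Counter that resets on Z only after a home request (Home_C-style)."""
    def __init__(self):
        self.count = 0
        self.home_pending = False

    def home_c(self):
        self.home_pending = True  # arm the reset

    def on_tick(self, z_pulse=False):
        # Z pulses are ignored until a Home_C has armed the reset
        if z_pulse and self.home_pending:
            self.count = 0
            self.home_pending = False
        else:
            self.count += 1

counter = GatedCounter()
for _ in range(5):
    counter.on_tick()
counter.on_tick(z_pulse=True)   # Z seen, but no Home_C yet: keep counting
counter.home_c()
counter.on_tick(z_pulse=True)   # first Z after Home_C: count resets
print(counter.count)  # 0
```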
So, what I want is for the PXI-6602 to track the actual position value according to these circumstances.
Thanks in advance!
Izia
YET
www.yetmotion.com

Hi Izia,
Technically, a position-capture task is an edge-counting task with a little more configuration done for you.  In this case, you are probably best off using one of the Angular Position examples.  The example that you specified should work fine.  I think that I originally misunderstood the question.
In order to reset the z index count, you will need to configure the z index using the DAQmx Create Channel function.  From the LabVIEW help:
z index enable specifies whether to use Z indexing for the channel.
z index value specifies in units the value to which to reset the measurement when signal Z is high and signal A and signal B are at the states you specify with z index phase.
z index phase specifies the states at which signal A and signal B must be while signal Z is high for NI-DAQmx to reset the measurement. If signal Z is never high while signal A and signal B are high, for example, you must choose a phase other than A High B High.
When signal Z transitions to high and how long it stays high varies from encoder to encoder. Refer to the documentation for the encoder to determine the timing of signal Z with respect to signal A and signal B.
A High B High (10040)
Reset the measurement when signal A and signal B are high.
A High B Low (10041)
Reset the measurement when signal A is high and signal B is low.
A Low B High (10042)
Reset the measurement when signal A is low and signal B high.
A Low B Low (10043)
Reset the measurement when signal A and signal B are low.
Lastly, I recommend that you look at this knowledge base article for more information about resetting the z index.
I think that the example you are using should give the desired results if configured correctly.  Please let us know if you have further questions.
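For readers prototyping in Python rather than LabVIEW, the same settings (z index enable, value, and phase) map onto the nidaqmx angular-encoder channel. This is a configuration sketch only, untested against hardware; the counter name "Dev1/ctr0" and pulses_per_rev value are placeholders you must replace with your own device and encoder resolution:

```python
import nidaqmx
from nidaqmx.constants import AngleUnits, EncoderType, EncoderZIndexPhase

with nidaqmx.Task() as task:
    # "Dev1/ctr0" and pulses_per_rev=4096 are assumptions for this example;
    # use your PXI-6602 counter name and your encoder's resolution.
    task.ci_channels.add_ci_ang_encoder_chan(
        "Dev1/ctr0",
        decoding_type=EncoderType.X_4,
        zidx_enable=True,                           # z index enable
        zidx_val=0.0,                               # z index value
        zidx_phase=EncoderZIndexPhase.AHIGH_BHIGH,  # z index phase
        units=AngleUnits.DEGREES,
        pulses_per_rev=4096,
    )
    print(task.read())
```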
Thanks,
Luke
Applications Engineer
National Instruments

Similar Messages

  • [svn:osmf:] 13754: Restore ability for HTTP streaming index handler to send/receive arbitrary context associated with an index, per Matthew's feedback.

    Revision: 13754
    Author:   [email protected]
    Date:     2010-01-25 10:56:31 -0800 (Mon, 25 Jan 2010)
    Log Message:
    Restore ability for HTTP streaming index handler to send/receive arbitrary context associated with an index, per Matthew's feedback.
    Modified Paths:
        osmf/trunk/framework/OSMF/.flexLibProperties
        osmf/trunk/framework/OSMF/org/osmf/events/HTTPStreamingIndexHandlerEvent.as
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/HTTPNetStream.as
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/HTTPStreamingIndexHandlerBase.as
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/f4f/HTTPStreamingF4FIndexHandler.as
    Removed Paths:
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/URLLoaderWithContext.as


  • PSE 5 index / catalog duplicate filenames

    How does the Photoshop Elements 5 catalog / indexer handle duplicate filenames? Or not? If you just upload your camera's native filenames, you're going to have a lot of _mg_3444.jpg, _mg_3445.jpg, etcetera, files over time. Is this why, when you add photos to the catalog, PSE 5 reports back that the new file wasn't added to the catalog because the filename already exists?

    >Is this why when you add photos to the catalog, PSE 5 reports back that new file wasn't added to the catalog because the filename already exists?
    I do not have that message in front of me but I think it says the File already exists rather than the Filename already exists.
    The criteria that the Organizer uses to determine whether a photo file is a duplicate of a photo already in the catalog uses more factors than just file name - factors such as size, date and time.
    Yes, I agree that there will be different photos that have the same file names. However, the use of the additional criteria should usually prevent different photos with the same file name from being rejected for import.
    Usually when I see that message, it is because I have previously imported that photo. Perhaps I had the same photo in multiple folders on my hard drive or I already had a photo and then received it again from another source. Or I am importing from a camera card that still contains photos that were previously imported.
    If I get a rejected as a duplicate photo and I don't immediately know why, then I may use the Organizer Find menu and do a Find by Filename to see all photo files with the name that was just rejected. Then I can look at all the photos with that file name to find which one duplicates my current attempted import.

  • Remove index references from selection

    Hello, can anyone offer me the VBA code to delete index fields from a selected range? See the snapshot of the fields below in non-printing-characters view. The index fields are always {XE "text"}
    AB

    Hi mk20061. You can suppress the display of the Index and TOC tabs together, but not the Index tab on its own. Go to your window's properties and deselect the "TOC & Index" option in the Tripane Tabs & Windows dialog section. The other alternative is to delete all the index keywords from the project, as the Index tab is only displayed if there are keywords to display.

  • Missing Index entries after Migrating Repository from 1.3.2 to 9i

    Hi,
    We have migrated our repository from 1.3.2 to 9i.
    Index entries were created for both the primary keys and foreign keys. The indexes were created by us, not by Designer.
    After migration to 9i, the index entries for primary and foreign keys are missing in the Design Editor. When we generate a table, we get an error because these index entries are missing.
    Does Designer 9i support index entries? Or how are indexes handled by Designer 9i?

    To clarify, your index entries were in the database but NOT defined in Des 1.3.2 model? Because if they were in the model, then I'd look back at your intermediate migration definitions to make sure the indexes were still defined properly in there.
    In Des 2.1/6.0/6i/9i/10g, Indexes are a separate sub-element (SACs) for a Table Definition. They don't have to be associated with a key structure. And in newer database versions, a PK / UK has an implicit index constructed for it in the database, so Designer doesn't bother showing you the Index in the model. Just the PK or UK itself (again, these are also separate SACs).
    And it's not clear where you're getting the error. During the generate? Or during the instantiation into a database? Neither situation is a common fault.

  • FileAdapter write mode - Dynamic filename, write down simple text structure

    Hi,
    I tried to use the FileAdapter in a BPEL process to write out an existing structure as a String, coming from the database (CLOB), which has a defined structure.
    My first problem is that I assign an output filename through the output message header variable, but it does not work. Instead, the name from the filename convention of the wizard (like po_%SEQ%.txt) is always taken.
    My second problem is that the FileAdapter destroys the format during the write procedure. E.g., the String structure is as follows:
    MKKOPF EF585773 07.05.2009 1 XXX Spielapparate u. Restaurant- betriebsgmbH 40031356 Brehmstraße 21 1110 Wien 900000585773 EUR ML
    MKMAIL [email protected]
    MKEINZ EF4428643 28.06.2009 Test Card Casino XXX Gesamtausgabe Textteil Allgemein Großanzeige 4C 1 ST 1 489,00 489,00 GES
    MKPOS. EF4428644 26.07.2009PRVB 948,66 / 10 / +5% WA
    and so on. So you can see it's quite a simple format, which has to stay like that!
    First I tried to use the opaque mode for the file output format, but I was stopped by the '/' characters in the source data. So I used the native builder to create an XML schema (I used delimited file, multiple records, delimited by white spaces, tabs and spaces). To identify the multiple records, I chose the end of line (EOL). Except EOF and EOL there is no other choice, like CRLF, \r or \n!
    However, testing my native format results in writing a file containing a single line with the original data. The original line breaks are gone, and therefore the system that should process the file further gets a parsing error!
    The simple question is: how can I simply write out string data (containing German special characters like ÖÄÜ --> ISO-8859-1, and also XML-reserved values like /) through the FileAdapter without destroying its structure?
    Second, how can I pass the filename dynamically, as using the output header values does not work?
    I hope some specialists are out there.
    Thanks & BR,
    Peter

    Hi James,
    here what I am doing.
    I read some records from the database through the database adapter. The returned records represent strings which should be written, line by line, separated by 0x0A (\n), to a single file.
    For the file write we also use an Oracle SOA adapter --> the FileAdapter.
    We discussed the opaque schema and the native builder for quite some time. To my understanding, only the native builder version works; otherwise I get an error because of the slashes contained in the string records.
    What I am searching for is an easy way to aggregate these database row strings and append 0x0A (\n) to each.
    With XPath functions like create-delimited-string(node, '&#xA;') I had no success. According to XML, &#xA; should represent 0x0A, but it does not work.
    Using encodeLineTerminators is not working either.
    So I really had to write a long BPEL construct, combined with Java code embedding, to do such a simple thing.
    Here is the interesting part (showing the while loop; the index handling and final FileAdapter write call details are omitted):
    <while name="While_1"
           condition="($VariableSapBillingJobs > 0) and ($VariableIndex &lt;= $VariableSapBillingJobs)">
      <sequence name="Sequence_4">
        <assign name="Assign">
          <copy>
            <from expression="concat(bpws:getVariableData('VariableBuffer'), bpws:getVariableData('InvokeSapDataSelection_eBillingSapDB4Select_OutputVariable','eBillingSapDB4SelectOutputCollection','/ns8:eBillingSapDB4SelectOutputCollection/ns8:eBillingSapDB4SelectOutput[$VariableIndex]/ns8:LINE'))"/>
            <to variable="VariableBuffer"/>
          </copy>
        </assign>
        <bpelx:exec name="JavaAppendCRLF" language="java" version="1.5">
          <![CDATA[String buffer = (String) getVariableData("VariableBuffer");
          buffer += "\n";
          setVariableData("VariableBuffer", buffer);]]>
        </bpelx:exec>
        <assign name="AggregateStructure">
          <copy>
            <from expression="$VariableIndex + 1"/>
            <to variable="VariableIndex"/>
          </copy>
        </assign>
      </sequence>
    </while>
    There must be a simpler way than that!
    Do you, or somebody else, know a simpler way, either with the FileAdapter native builder or at least with an XPath function?
    Furthermore, my next homework is the inverse operation: I will read a file, this time terminated with 0x0D 0x0A, and must translate it to XML to use it further in my BPEL process.
    Here, too, I have no idea where the FileAdapter supports me.
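Outside BPEL, the two transformations Peter describes (join rows with 0x0A on write, split on 0x0D 0x0A on read) are one-liners; a plain-Python sketch of the intended behaviour, with invented sample data:

```python
# write: aggregate DB row strings, terminating each line with 0x0A (\n)
rows = ["MKKOPF EF585773 07.05.2009 1", "MKMAIL [email protected]", "MKPOS. EF4428644"]
payload = "\n".join(rows) + "\n"
print(repr(payload))

# read (the inverse): a file terminated with 0x0D 0x0A (\r\n),
# split back into rows, dropping the trailing empty element
incoming = "LINE1\r\nLINE2\r\n"
lines = incoming.split("\r\n")[:-1]
print(lines)  # ['LINE1', 'LINE2']
```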
    Thanks & BR,
    Peter

  • Can't remove a node from a tree

    I am using the custom tree dataDescriptor provided in the Flex live docs. It works for creating the tree and adding nodes; however, when I try to remove a node from the tree it doesn't work. Does anyone have any idea?
    This is the code for MyCustomTreeDataDescriptor.as:
    package {
        import mx.collections.ArrayCollection;
        import mx.collections.CursorBookmark;
        import mx.collections.ICollectionView;
        import mx.collections.IViewCursor;
        import mx.events.CollectionEvent;
        import mx.events.CollectionEventKind;
        import mx.controls.treeClasses.*;

        public class MyCustomTreeDataDescriptor implements ITreeDataDescriptor {

            // The getChildren method requires the node to be an Object
            // with a children field. If the field contains an
            // ArrayCollection, it returns the field;
            // otherwise, it wraps the field in an ArrayCollection.
            public function getChildren(node:Object, model:Object=null):ICollectionView {
                try {
                    if (node is Object) {
                        if (node.children is ArrayCollection) {
                            return node.children;
                        } else {
                            return new ArrayCollection(node.children);
                        }
                    }
                } catch (e:Error) {
                    trace("[Descriptor] exception checking for getChildren");
                }
                return null;
            }

            // The isBranch method simply returns true if the node is an
            // Object with a children field. It does not support empty
            // branches, but does support null children fields.
            public function isBranch(node:Object, model:Object=null):Boolean {
                try {
                    if (node is Object) {
                        if (node.children != null) {
                            return true;
                        }
                    }
                } catch (e:Error) {
                    trace("[Descriptor] exception checking for isBranch");
                }
                return false;
            }

            // The hasChildren method returns true if the node actually
            // has children.
            public function hasChildren(node:Object, model:Object=null):Boolean {
                if (node == null)
                    return false;
                var children:ICollectionView = getChildren(node, model);
                try {
                    if (children.length > 0)
                        return true;
                } catch (e:Error) {
                }
                return false;
            }

            // The getData method simply returns the node as an Object.
            public function getData(node:Object, model:Object=null):Object {
                try {
                    return node;
                } catch (e:Error) {
                }
                return null;
            }

            // The addChildAt method does the following:
            // If the parent parameter is null or undefined, inserts
            // the child parameter as the first child of the model parameter.
            // If the parent parameter is an Object and has a children field,
            // adds the child parameter to it at the index parameter location.
            // It does not add a child to a terminal node if it does not have
            // a children field.
            public function addChildAt(parent:Object, child:Object, index:int,
                    model:Object=null):Boolean {
                var event:CollectionEvent =
                    new CollectionEvent(CollectionEvent.COLLECTION_CHANGE);
                event.kind = CollectionEventKind.ADD;
                event.items = [child];
                event.location = index;
                if (!parent) {
                    var iterator:IViewCursor = model.createCursor();
                    iterator.seek(CursorBookmark.FIRST, index);
                    iterator.insert(child);
                } else if (parent is Object) {
                    if (parent.children != null) {
                        if (parent.children is ArrayCollection) {
                            parent.children.addItemAt(child, index);
                            if (model) {
                                model.dispatchEvent(event);
                                model.itemUpdated(parent);
                            }
                            return true;
                        } else {
                            parent.children.splice(index, 0, child);
                            if (model)
                                model.dispatchEvent(event);
                            return true;
                        }
                    }
                }
                return false;
            }

            // The removeChildAt method does the following:
            // If the parent parameter is null or undefined, removes
            // the child at the specified index in the model.
            // If the parent parameter is an Object and has a children field,
            // removes the child at the index parameter location in the parent.
            public function removeChildAt(parent:Object, child:Object, index:int,
                    model:Object=null):Boolean {
                var event:CollectionEvent =
                    new CollectionEvent(CollectionEvent.COLLECTION_CHANGE);
                event.kind = CollectionEventKind.REMOVE;
                event.items = [child];
                event.location = index;
                // handle top level where there is no parent
                if (!parent) {
                    var iterator:IViewCursor = model.createCursor();
                    iterator.seek(CursorBookmark.FIRST, index);
                    iterator.remove();
                    if (model)
                        model.dispatchEvent(event);
                    return true;
                } else if (parent is Object) {
                    if (parent.children != undefined) {
                        parent.children.splice(index, 1);
                        if (model)
                            model.dispatchEvent(event);
                        return true;
                    }
                }
                return false;
            }
        }
    }
    This is my tree definition:
    <mx:Tree width="143" top="0" bottom="0" left="0" height="100%"
        id="publicCaseTree"
        dataDescriptor="{new MyCustomTreeDataDescriptor()}"
        dataProvider="{ac}"
        defaultLeafIcon="@Embed('assets/caseIcon.png')"
        change="publicTreeChanged(event)"
        dragEnabled="true"
        dragMoveEnabled="false"/>
    This is how I remove the selected node from the tree. When the Delete button is clicked, the doDeleteCase function is executed.
    public function publicTreeChanged(event:Event):void {
        selectedNode = publicCaseTree.dataDescriptor.getData(Tree(event.target).selectedItem, ac);
    }
    public function doDeleteCase(event:Event):void {
        publicCaseTree.dataDescriptor.removeChildAt(publicCaseTree.firstVisibleItem, selectedNode, 0, ac);
    }
    Any help would be appreciated. Thanks.

    Finally I removed nodes from the tree, but I'm not sure I did it the right way. Anybody who encounters the same problem, please discuss.

  • Oracle Text and Workspace Manager

    Has anybody incorporated Workspace Manager and Oracle Text together? How is the Oracle Text index handled? Can users in different workspaces submit documents, have them indexed, and be the only ones to see those documents?

    Hi,
    I do not have much experience with Oracle Text, and am unsure exactly how it works. As such, I would suggest filing a TAR to request this information.
    Regards,
    Ben

  • Querying objects not in the NamedCache

    The wiki topic on querying (http://wiki.tangosol.com/display/COH32UG/Querying+the+Cache ) points out that a query will "apply only to currently cached data".
         This seems fairly logical because it seems unreasonable to expect the cache to hold onto information that it has already evicted.
         If you were to design a DAO layer (following http://wiki.tangosol.com/display/COH32UG/Managing+an+Object+Model ) using the first of the following architectures:
         1. Direct Cache access:
         App <---> NamedCache <---> CacheStore <---> DB
         2. Direct Cache and DB-DAO access:
         App
         |
         CacheAwareDAO <---> CacheStore <---> DB
         |
         NamedCache
         |
         CacheStore
         |
         DB
         you would then have a situation where you would not be able to query evicted data.
         So by using the 2nd strategy I assume you would probably always want to bypass the cache for all queries other than by primary key, to ensure that you are always querying the entire persistent population.
         This seems a little coarse grained and also reduces the utility of the Coherence cache (unless the bulk of your queries are by primary key).
         Can anybody tell me if my assumption is wrong, and if there are any usage strategies that mitigate this aspect?
         Thx,
         Ben

    Hi Rob,
         > Why would you need 2 separate caches?
         the first cache would have eviction policy, and caches values, but does not have indexes
         the second would not have eviction, does not store data, but has index updates on changes.
         This way you have a fully indexed but not stored data-set, similar to the difference between stored and indexed attributes in Lucene.
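To make the two-cache idea concrete, here is a toy sketch in plain Python (ordinary dictionaries, not the Coherence API; all names are invented): the value map evicts, while the index map is updated only on puts and so still covers evicted entries.

```python
class TwoMapIndex:
    """Toy sketch: a value cache with eviction plus a full index that survives eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.values = {}   # first "cache": holds values, may evict
        self.index = {}    # second "cache": attribute -> set of keys, never evicted

    def put(self, key, attr, value):
        if len(self.values) >= self.capacity:
            self.values.pop(next(iter(self.values)))  # naive eviction policy
        self.values[key] = value
        self.index.setdefault(attr, set()).add(key)   # index updated on change only

    def query(self, attr):
        # the index still knows evicted keys; value misses would go
        # back to the backing store in a real system
        return self.index.get(attr, set())

c = TwoMapIndex(capacity=1)
c.put(1, "red", "apple")
c.put(2, "red", "cherry")   # evicts key 1 from the value map
print(c.query("red"))       # the index still covers the evicted entry: {1, 2}
```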
         > Why not just
         > maintain a index within each cache so that every
         > entry causes the index to get updated inline (i.e.
         > synchronously within the call putting the data into
         > the cache)?
         >
         You cannot manually maintain an index, because that is not a configurable extension point (it is not documented how an index should be updated manually). You have to rely on Coherence to do it for you upon changes to entries in the owned partitions.
         And since Coherence does remove index references to evicted or removed data, the index would not know about the non-cached data.
         Or did I misunderstand how you imagine the indexes to be maintained? Did you envision an index separate from what Coherence has?
         > (You may have to change Coherence to do this.....)
         Changing Coherence was exactly what I was trying to avoid. I tried to come up with things within the specified extension points, and the allowed things, although it might be possible that I still did not manage to remain within the allowed set of operations.
         Of course, if changing Coherence is allowed, allowing an option of filtering index changes to non-eviction events is probably the optimal solution.
         > And I don't think that the write-behind issue would
         > be a problem, as the current state cache of the cache
         > (and it's corresponding index) reflects the future
         > state of the backing store (which accordingly to
         > Coherence's resilience guarantee will definitely
         > occur).
         >
         The index on the second cache in the write-behind scenario would be out-of-synch only if the second cache is updated by invocations to the cache-store of the first cache. If it is updated upon changes to the backing map, then it won't. Obviously if you don't have 2 caches, but only one, it cannot be out-of-synch.
         > So you would have a situation where cache evictions
         > occur regularly but the index just overflows to disk
         > in such a fashion that relevant portions of it can be
         > recalled in an intelligent fashion, leveraging some
         > locality of reference for example.
         >
         I don't really see how this could be done. AFAIK, all the indexes Coherence has are maintained in memory and do not overflow to disk, though I may be wrong on this; then again, I may have misunderstood what you mean by index handling.
         > a) you leverage locality of reference by using as
         > much keyed data access as possible
         > b) have Coherence do the through-reading
         > c) use database DAO for range querying
         > d) if you were to use Hibernate for (c), you might be
         > able to double dip by using Coherence as an L2 cache.
         > (I don't know if this unecessarily duplicates cached
         > data....)
         >
         > Any thoughts on this?
         a: if you know ids on your own, then this is the optimal solution, provided cache hit rates can be driven high enough. if you have to query for ids, the latency might be too high.
         b: read-through can become suboptimal, since AFAIK, currently the cache store reads all rows one by one, only read-ahead uses loadAll, but I may be wrong on this. Loading from database can be optimized for multiple id loading as well, to be faster than the same via cache store. So it is very important that the cache hit rate be very high for performance-relevant data in case of read-through.
         c: use database dao for complex querying, possibly almost anything more complex than straight top-down queries. make performance tests for both solutions, try to rely on partition affinity, and try to come up with data structures that help with making indexes which can be queried with as few queries as possible, and with not too high index access count.
         d: you cannot query by Coherence on Hibernate second-level cache, as Hibernate second-level caches do not contain structured data, but contain byte[][]s or byte[], holding the column values serialized to it (separately or the same byte[], I don't remember which).
         Best regards,
         Robert

  • [svn:osmf:] 13966: 1. Integrate code changes from Matthew to remove timeBias and add support for subclip

    Revision: 13966
    Author:   [email protected]
    Date:     2010-02-03 15:04:44 -0800 (Wed, 03 Feb 2010)
    Log Message:
    1. Integrate code changes from Matthew to remove timeBias and add support for subclip
    2. A minor bug fix with the index handler
    Modified Paths:
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/HTTPNetStream.as
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/HTTPStreamingState.as
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/f4f/HTTPStreamingF4FIndexHandler.as


  • Gen. List of a gen. class with a gen. type of a Comparable ? super C

    Hi,
    I'm facing a (little ;-)) problem getting my code free of unchecked warnings.
    Please consider the following code:
    There are two generic classes:
    public class AppDbObjectCache<T extends AppDbObject> {
      private List<AppDbObjectCacheIndex<T,?>> indexes;

      // select an object via cache by unique key
      public <C extends Comparable<? super C>> T select(int handle, C key) {
        // unchecked warning (won't compile without cast)
        AppDbObjectCacheIndex<T,C> index = (AppDbObjectCacheIndex<T,C>) indexes.get(handle);
      }
    }
    and:
    public class AppDbObjectCacheIndex<T extends AppDbObject, C extends Comparable<? super C>> {
    }
    My problem is the wildcard in private List<AppDbObjectCacheIndex<T,?>> indexes, because I need a cast whenever I get an element out of the List, and that cast triggers an unchecked warning.
    I tried:
    private List<AppDbObjectCacheIndex<T,? extends Comparable<?>>> indexes;
    but the compiler is right about the fact that a Comparable<?> is not the same as a C extends Comparable<? super C>, and it still insists on a cast.
    Then I tried:
    private List<AppDbObjectCacheIndex<T,? extends Comparable<? extends ?>>> indexes;
    which is nonsense, of course, but it shows the problem of having two different wildcards, and I have no clue how to express that.
    To introduce the second wildcard, i.e. the 'C', I tried:
    private <C extends Comparable<? super C>> List<AppDbObjectCacheIndex<T,C>> indexes;
    but the compiler complains about wrong syntax, because declaring a generic type is only allowed for classes and methods, not for variable declarations.
    Is there a way to express in the List-declaration that the 2nd type
    variable of the List's objects is something that extends a Comparable of a superclass of itself?
    Or do I have to live with the casts as with Class.forName() ?
    Any idea?
    Thanx,
    Harald

    Ok.
    Here is an example stripped down to the very essentials.
    We're talking about a typesafe generic cache for persistable objects
    (stored in a database in real life, but that doesn't matter in this example).
    There are 5 classes:
    - the superclass for all such objects. These objects are uniquely
    identified by a Long integer, the so-called object-id.
    - the sample object class to be cached. Instances of this class are
    additionally unique by their name, which is a String.
    - the sample class showing how to setup and use the cache.
    - the generic cache with the ability for managing any number of unique indexes
    - the generic cache index
    So far, so good.
    The super class of all database objects:
    /** Database object. */
    public class BlaObject {
      private long id;        // unique object id

      public void setId(long id) {
        this.id = id;
      }
      public long getId() {
        return id;
      }
      public BlaObject getFromStorageById(long id) {
        // whatever is necessary to get it from storage by id ...
        return this;
      }
    }
    The sample object class extended by the unique attribute 'name':
    public class BlaExampleObject extends BlaObject {
      private String name;    // unique name

      public void setName(String name) {
        this.name = name;
      }
      public String getName() {
        return name;
      }
      public BlaExampleObject getFromStorageById(long id) {
        return (BlaExampleObject) super.getFromStorageById(id);
      }
      public BlaExampleObject getFromStorageByName(String name) {
        // whatever is necessary to get it from storage by name ...
        return this;
      }
    }
    The example of how to use the cache:
    public class BlaCacheExample {
      public BlaCacheExample() {
        BlaObjectCache<BlaExampleObject> cache = new BlaObjectCache<BlaExampleObject>();
        // create first index by object-id
        int idHandle = cache.addIndex(new BlaObjectCacheIndex<BlaExampleObject,Long>() {
          public BlaExampleObject select(Long id) {
            return new BlaExampleObject().getFromStorageById(id);
          }
          public Long extract(BlaExampleObject object) {
            return object.getId();
          }
        });
        // create second index by name
        int nameHandle = cache.addIndex(new BlaObjectCacheIndex<BlaExampleObject,String>() {
          public BlaExampleObject select(String name) {
            return new BlaExampleObject().getFromStorageByName(name);
          }
          public String extract(BlaExampleObject object) {
            return object.getName();
          }
        });
        // get an object by id via cache
        BlaExampleObject object = cache.select(idHandle, new Long(1234));
        // get it via name
        object = cache.select(nameHandle, "test");
      }
    }
    The cache:
    import java.util.*;

    /** Object cache with multiple unique indexes. */
    public class BlaObjectCache<T extends BlaObject> {

      // >>>>> WHAT IS THE CORRECT DECLARATION FOR '?' <<<<< ??????????????????
      private List<BlaObjectCacheIndex<T,?>> indexes;   // the indexes

      /** Creates an instance of a BlaObjectCache. */
      public BlaObjectCache() {
        indexes = new ArrayList<BlaObjectCacheIndex<T,?>>();
      }

      /**
       * Add a unique index.
       * @param index is the index to add
       * @return the handle to the index
       */
      public <C extends Comparable<? super C>> int addIndex(BlaObjectCacheIndex<T,C> index) {
        indexes.add(index);
        return indexes.size() - 1;
      }

      /**
       * Retrieve an object via the cache.
       * @param handle is the index-handle
       * @param key is the unique key
       * @return the object or null if no such object
       */
      public <C extends Comparable<? super C>> T select(int handle, C key) {
        BlaObjectCacheIndex<T,C> index = (BlaObjectCacheIndex<T,C>) indexes.get(handle);
        T obj = index.get(key);   // get object from cache
        if (obj == null) {
          // not in cache: get it from storage
          obj = index.select(key);
          if (obj != null) {
            // add object to all indexes
            for (BlaObjectCacheIndex<T,?> ndx : indexes) {
              ndx.add(obj);
            }
          }
        }
        return obj;
      }
    }
    and the cache index:
    import java.util.*;

    /**
     * Generic cache index.
     */
    public abstract class BlaObjectCacheIndex<T extends BlaObject, C extends Comparable<? super C>> {

      private TreeMap<C, T> cacheMap;     // mapping of keys to BlaObjects

      /**
       * Create a new index.
       */
      public BlaObjectCacheIndex() {
        cacheMap = new TreeMap<C, T>();
      }

      /**
       * Select a BlaObject by a unique key from storage.
       * @param key uniquely identifies the object
       * @return the selected object or null if it does not exist
       */
      abstract public T select(C key);

      /**
       * Extract the key from an object.
       * @param object is the object to extract the key from
       * @return the key
       */
      abstract public C extract(T object);

      /**
       * Get an object from the cache by key.
       * @param key is the Comparable that uniquely identifies the object
       * @return the object or null if not in the cache
       */
      public T get(C key) {
        return cacheMap.get(key);
      }

      /**
       * Add an object to the index.
       * @param object is the object to add
       * @return true if added, false if the object was already in the index
       */
      public boolean add(T object) {
        return cacheMap.put(extract(object), object) == null;
      }
    }

    The BlaObjectCacheIndex is 100% generic and typesafe.
    The problem arises in BlaObjectCache because it holds a
    List of BlaObjectCacheIndexes, each with a different <C extends Comparable<? super C>>.
    Hence, I cannot make C a generic type variable
    of BlaObjectCache itself. I can enforce type checking for 'C' in
    the individual method declarations of BlaObjectCache, but
    unfortunately there seems to be no way to declare
    that the List contains such elements. At least, I couldn't figure out
    how so far :-(
    My reasoning is: there is no nasty type-casting or any other trick in BlaObjectCacheIndex.
    The type constraints correspond to the real demands of the application.
    A generic type like <C extends Comparable<? super C>> is not rocket science ;-)
    I just want to learn how to get the List declaration right.
    Can you explain how?
    Regards,
    Harald
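
    One commonly used pattern for this kind of heterogeneous container (not from this thread; all names here such as HandleSketch and Handle are hypothetical) is to let the index handle itself carry the key type C. The list still has to be declared with a wildcard, List<Index<?>>, but the unavoidable unchecked cast is confined to one place and is safe as long as handles are only ever produced by addIndex:

    ```java
    import java.util.*;

    /** Minimal sketch (hypothetical names) of the typed-handle idea. */
    public class HandleSketch {

      // A handle remembers its slot and, via its type parameter, the key type C.
      static final class Handle<C> {
        final int slot;
        Handle(int slot) { this.slot = slot; }
      }

      // A trivial stand-in index: maps keys of type C to String values.
      static final class Index<C extends Comparable<? super C>> {
        private final TreeMap<C, String> map = new TreeMap<C, String>();
        void put(C key, String value) { map.put(key, value); }
        String get(C key) { return map.get(key); }
      }

      // The list itself can only be declared with a wildcard.
      private final List<Index<?>> indexes = new ArrayList<Index<?>>();

      // The only place handles are created, so a Handle<C> always refers
      // to an Index<C> stored at its slot.
      <C extends Comparable<? super C>> Handle<C> addIndex(Index<C> index) {
        indexes.add(index);
        return new Handle<C>(indexes.size() - 1);
      }

      @SuppressWarnings("unchecked")
      <C extends Comparable<? super C>> String select(Handle<C> handle, C key) {
        // Unchecked cast: safe by the invariant maintained in addIndex.
        return ((Index<C>) indexes.get(handle.slot)).get(key);
      }

      public static void main(String[] args) {
        HandleSketch cache = new HandleSketch();
        Index<Long> byId = new Index<Long>();
        Handle<Long> idHandle = cache.addIndex(byId);
        byId.put(Long.valueOf(1234L), "object-1234");
        System.out.println(cache.select(idHandle, Long.valueOf(1234L)));
      }
    }
    ```

    The design point is the same trade-off as in BlaObjectCache: the wildcard list cannot express "the element at slot i has the C of the handle for slot i", so one localized unchecked cast remains; the typed handle merely makes passing a key of the wrong type a compile-time error instead of a runtime ClassCastException.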

  • [svn:osmf:] 13985: Phase 2 of the MBR refactoring.

    Revision: 13985
    Author:   [email protected]
    Date:     2010-02-04 22:29:47 -0800 (Thu, 04 Feb 2010)
    Log Message:
    Phase 2 of the MBR refactoring.  Implement play2 on HTTPNetStream.  Necessitated modifying the index handler to expose the stream names via a notify event (though this approach may change).  Isolate HTTP setup code to HTTPStreamingNetLoader.  Refactor NetStreamPlayTrait to be completely agnostic to NetStream type.  Merge NetStreamDynamicStreamTrait and HTTPStreamingNetStreamDynamicStreamTrait.  Introduce temporary loaded context for F4MElement to avoid RTEs.
    The end result is that RTMP and HTTP stream switching now use the NetStreamSwitchingManager, rather than RTMP using the former and HTTP using internal switching logic.  Still need to determine whether to remove HTTPNetStream's switching logic, or keep there for other use cases.  Note that we still need to extract HTTPNetStream's switching logic into one or more switching rules (that's phase 3, still to come).  Doing so might involve modifying the MetricsProvider to take into account any variations in metrics between RTMP and HTTP.
    Modified Paths:
        osmf/trunk/framework/OSMF/.flexLibProperties
        osmf/trunk/framework/OSMF/org/osmf/events/HTTPStreamingIndexHandlerEvent.as
        osmf/trunk/framework/OSMF/org/osmf/manifest/F4MLoader.as
        osmf/trunk/framework/OSMF/org/osmf/net/NetStreamPlayTrait.as
        osmf/trunk/framework/OSMF/org/osmf/net/dynamicstreaming/NetStreamDynamicStreamTrait.as
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/HTTPNetStream.as
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/HTTPStreamingNetLoader.as
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/HTTPStreamingUtils.as
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/f4f/HTTPStreamingF4FIndexHandler.as
        osmf/trunk/framework/OSMF/org/osmf/video/VideoElement.as
    Added Paths:
        osmf/trunk/framework/OSMF/org/osmf/manifest/F4MLoadedContext.as
    Removed Paths:
        osmf/trunk/framework/OSMF/org/osmf/net/httpstreaming/HTTPStreamingNetStreamDynamicStreamTrait.as

    Many thanks for the fast reply.
    I have a follow-up question.
    What happens if I modify the reconnect code in the OSMF NetLoader class as recommended and then load multiple third-party OSMF plugins
    that may include the original OSMF version of the NetLoader class?
    Which one will be used at runtime?
    Thanks in advance!

  • In-Context Editor can't handle implicit index.html

    The In-Context Editor can't handle implicit index.html - e.g., when a menu link points to /subfolder/ with a file named index.html inside, the link works fine on the front-end but In-Context Editor users get a 404 Page Not Found error.
    Naively, it appends .htm to the folder, producing an invalid URL like this: /subfolder/.htm
    The workaround is to explicitly include the filename in the link like /subfolder/index.html, but this is undesirable.
    Anyone else been bitten by this?

    /subfolder will go to the same place as well.

  • HT3350 what can you do when you get this warning? "Scouting Book  Collection Index" could not be handled because Numbers cannot open files in the "Numbers Document" format.

    “Scouting Book  Collection Index” could not be handled because Numbers cannot open files in the “Numbers Document” format.

    This file was created in 2002 in MS Word, but that had been forgotten.
    When I went to open it, MS Word was not a program option on my laptop. For some reason, I assumed that either Pages or Numbers would open the file. How the file was created had been forgotten and there was no file info, but I knew that it was a table of valuable information.
    The warning threw me off, because it led me to believe that it was a Numbers file, which Numbers would not open. A paradox.
    I took the file to a laptop running OS X Panther with MS Office, and it opened in MS Word. Problem solved.
    Thank you for your help.

  • Invalid request - request handler "Index" not found

    I have deployed JatoSample.war on iPlanet Web Server. When I try to access the sample app at http://myserver/JatoSample/samples.html,
    it throws the following error:
    [19/May/2003:10:29:31] warning (19890): vs(https-abeesam2)Application Error
    [19/May/2003:10:29:31] warning (19890): vs(https-abeesam2)javax.servlet.ServletException: Invalid request - request handler "Index" not found
    at com.iplanet.jato.ApplicationServletBase.onRequestHandlerNotFound(ApplicationServletBase.java:371)
    at com.iplanet.jato.ApplicationServletBase.fireRequestHandlerNotFoundEvent(ApplicationServletBase.java:114)
    at com.iplanet.jato.ApplicationServletBase.getViewBeanInstance(ApplicationServletBase.java:224)
    at com.iplanet.jato.ApplicationServletBase.processRequest(ApplicationServletBase.java:569)
    at com.iplanet.jato.ApplicationServletBase.doPost(ApplicationServletBase.java:85)
    at com.iplanet.jato.ApplicationServletBase.doGet(ApplicationServletBase.java:74)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at com.iplanet.server.http.servlet.NSServletRunner.invokeServletService(NSServletRunner.java:919)
    at com.iplanet.server.http.servlet.WebApplication.service(WebApplication.java:1061)
    at com.iplanet.server.http.servlet.NSServletRunner.ServiceWebApp(NSServletRunner.java:981)
    Any help is much appreciated.
    Ashok Beesam

    For purchased copies of the Sun ONE Application Framework (JATO), please contact Sun Software Support Services. This organization can provide "break and fix" assistance.
