Concurrent analogue reading

Dear all,
I am desperately in need of some help.
I am doing a project where 4 batteries can be tested simultaneously.
Right now, I have the program coded as shown in the VI attached below, but when I try to run it I get NI-DAQmx Error -50103, "The Specified Resource is Reserved".
I came to know that the error is caused when I try to read multiple tasks at the same time. What are your thoughts on this? The program works when just one channel is read, but not when more than one is read simultaneously.
Thanks,
Labmat
Attachments:
Untitled 1.vi 179 KB
DCG_BAT.vi 62 KB
CALC_DISP.vi 22 KB

I made some changes to your code showing an example of how you can implement the analog input reads. I am using local variables to transfer the data between the parallel loops (just be aware of the race conditions and performance problems that local/global variables can introduce). In this case, your VI is not too complicated, so I don't see any harm in using local variables to pass the information.
Attachments:
Untitled 1.vi 179 KB

Similar Messages

  • Concurrent nodes reading from JMS topic (cluster environment)

    Hi.
    Need some help on this:
    Concurrent nodes reading from JMS topic (cluster environment)
    Thanks
    Denis

    After some thinking, I noted the following:
    1 - It's correct that only one node subscribes to a topic at a time. Otherwise, the same message would be processed by the nodes many times.
    2 - In order to solve the load-balancing problem, I think the Topic should be replaced with a Queue. This way, each BPEL process on a node would poll for a message, and as soon as a message arrives, only one BPEL node gets it and takes it off the Queue (see the sketch after this post).
    The legacy JMS provider I mentioned in the post above is actually the Retek Integration Bus (RIB). I'm integrating Retek apps with E-Business Suite.
    I'll try to configure the RIB to provide both a Topic (for the existing application consumers) and a Queue (an exclusive channel for BPEL).
    Have you already tried an integration like this?
    Thanks
    Denis
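
    A minimal sketch of the point-to-point behaviour described in item 2, assuming a JMS 1.1 provider; the JNDI names (jms/ConnectionFactory, jms/RibToBpelQueue) and the text-message payload are illustrative assumptions, not the actual RIB configuration. If every cluster node runs this same consumer against a queue, each message is delivered to exactly one node:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class QueueWorker {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // JNDI names are placeholders for whatever the provider exposes
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/RibToBpelQueue");

            Connection con = cf.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue);
                con.start();
                // Point-to-point: each message is taken off the queue by
                // exactly one of the competing consumers in the cluster.
                while (true) {
                    TextMessage msg = (TextMessage) consumer.receive();   // assumes text messages
                    System.out.println("Processing " + msg.getText());
                }
            } finally {
                con.close();
            }
        }
    }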

  • Concurrent File Read Question

    Hi:
    I am writing a multi-threaded program to read from one large file. Different threads will read different parts of the file, so they do not interfere.
    I am considering using java.nio's memory-mapped I/O since the file is pretty large, but I cannot find any documentation regarding the thread-safety of the API.
    Suppose the file is mapped to a MappedByteBuffer and two threads issue two get() operations concurrently. Can the two get() calls run concurrently, or does the second one block until the first returns, or, even worse, could an incorrect result be returned?
    Thanks,
    -Yi
    Edited by: grantguo on Oct 22, 2008 11:52 PM

    "You don't know what flock() and flock16() function are in Platform OS's." Your guesswork about the extent of my knowledge is just as uninformed and irrelevant as everything else you've posted here. You're certainly in no position to lecture me about flock() when you don't know what it does yourself.
    "So a very good clue about java thread-safeness implicitly is java.nio.channels.FileChannel methods for locking and locking zones of a file." In other words, you haven't even read the Javadoc for the solution you're recommending. The quotation in Roy's post above shows that FileLock cannot possibly be the solution to any threading problem.
    The rest of your posting is equally irrelevant and erroneous. I wrote a paper criticising the semantics of lockf(), now flock(), in 1984. Whatever flock16() may be, Google doesn't know what it is either. Both are irrelevant. Every OS I have ever used allows concurrent reads to a file, unless you lock it. Neither Unix nor Windows has a 'VM level'. The 'JVM base drivers' are a figment of your imagination. There are no drivers in the JVM. What you're talking about is just the JVM.
    The OP's 'main worries' are precisely this: whether it is thread-safe to read a file via multiple threads via a MappedByteBuffer. He has no need to lock the file and it wouldn't solve this problem even if he did. It wouldn't solve any problem unless there were simultaneous writers to the file, which he hasn't mentioned.
    He may need to synchronize the threads against each other because MappedByteBuffer isn't documented to be thread-safe (although I suspect that the methods that take an offset argument actually are); see the sketch below for one way to give each thread its own view of the mapping.
    Most of this has already been stated above, before you appeared in the discussion. You're just wasting time and space with your erroneous and irrelevant contributions. You've been warned about this before and you've been blocked before on account of it. Moderators are watching this thread and your other postings.
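
    If it helps, here is a minimal sketch of that idea; the file name, the 4-way split and the byte-summing work are made up for illustration, and it assumes the file fits in a single mapping under 2 GB. Each thread reads through its own duplicate() of the mapping (alternatively, the absolute get(int) overload could be used), so no thread ever touches another thread's position:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class ParallelMappedRead {
        public static void main(String[] args) throws IOException, InterruptedException {
            try (FileChannel ch = FileChannel.open(Paths.get("large.dat"), StandardOpenOption.READ)) {
                int size = (int) ch.size();                 // assumes the whole file fits in one mapping
                MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, size);
                int threads = 4;
                Thread[] workers = new Thread[threads];
                for (int i = 0; i < threads; i++) {
                    final int start = i * (size / threads);
                    final int end = (i == threads - 1) ? size : start + size / threads;
                    // duplicate() shares the mapped memory but has its own
                    // position/limit, so relative get() calls in one thread
                    // cannot disturb another thread's view.
                    final ByteBuffer view = map.duplicate();
                    workers[i] = new Thread(() -> {
                        view.position(start).limit(end);
                        long sum = 0;
                        while (view.hasRemaining()) {
                            sum += view.get() & 0xFF;       // read-only work on this thread's slice
                        }
                        System.out.println(Thread.currentThread().getName() + " sum=" + sum);
                    });
                    workers[i].start();
                }
                for (Thread t : workers) {
                    t.join();
                }
            }
        }
    }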

  • Concurrent read-only access vs syncronized

    I have a certain object that contains two public HashMaps. According to the documentation, HashMaps are not synchronized by default, although it's possible to obtain synchronized wrappers via the Collections utility class.
    These HashMaps are only populated at the beginning of the process by a single thread, and from then on no more updates, inserts, or deletes are done.
    Later on, several different threads concurrently read data out of these Maps.
    My question is: if the only access is read-only, am I really obligated to synchronize the Maps?
    Is there any problem if two concurrent threads READ from the same memory space?

    An interesting question. I believe that reading without synchronisation would be safe, provided the maps are safely published to the reader threads after they are populated. If you are writing a package that will be used by others, you should use "final" and "private" where appropriate to guarantee that nobody will extend your class to incorporate a concurrent writer.
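
    A minimal sketch of the populate-then-read pattern being discussed (the class, field and key names are made up): build the map in one thread, publish it through a final field, and let readers share it without any locking:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public class LookupTables {
        // Populated once, never modified afterwards. The final field (plus the
        // unmodifiable wrapper) safely publishes the fully built map to any
        // thread that later obtains a reference to this object.
        private final Map<String, Integer> codes;

        public LookupTables() {
            Map<String, Integer> tmp = new HashMap<>();
            tmp.put("OK", 200);
            tmp.put("NOT_FOUND", 404);
            codes = Collections.unmodifiableMap(tmp);
        }

        public Integer lookup(String key) {
            // Any number of threads may call this concurrently: reads of an
            // unchanging HashMap do not interfere with each other.
            return codes.get(key);
        }
    }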

  • Java.util.concurrent.ConcurrentHashMap

    All,
    I prefer to use java.util.concurrent.ConcurrentHashMap over a Hashtable, but there are some points regarding this structure that are not very clear to me.
    From java.util.concurrent: Class ConcurrentHashMap:
    "A hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates. This class obeys the same functional specification as Hashtable, and includes versions of methods corresponding to each method of Hashtable.
    "However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is not any support for locking the entire table in a way that prevents all access. This class is fully interoperable with Hashtable in programs that rely on its thread safety but not on its synchronization details. Iterators are designed to be used by only one thread at a time."
    Also from: Java API: Package java.util.concurrent, we read:
    "The "Concurrent" prefix used with some classes in this package is a shorthand indicating several differences from similar "synchronized" classes. For example java.util.Hashtable and Collections.synchronizedMap(new HashMap()) are synchronized. But ConcurrentHashMap is "concurrent". A concurrent collection is thread-safe, but not governed by a single exclusion lock. In the particular case of ConcurrentHashMap, it safely permits any number of concurrent reads as well as a tunable number of concurrent writes. "Synchronized" classes can be useful when you need to prevent all access to a collection via a single lock, at the expense of poorer scalability. In other cases in which multiple threads are expected to access a common collection, "concurrent" versions are normally preferable. And unsynchronized collections are preferable when either collections are unshared, or are accessible only when holding other locks."
    Based on the above, is it correct if I say:
    When using a structure like Hashtable, all the methods or operations are synchronized,
    meaning that if one thread is accessing the Hashtable (via get(), put(), or other methods on this structure), it owns the lock and all other threads are locked out until the thread that owns the lock releases it; which means only one thread can access the hash table at a time, which can cause performance issues.
    We need to use a synchronized block or method only if two threads modify a "shared resource"; if they do not modify a shared resource, we do not need synchronization.
    On the other hand, the methods of ConcurrentHashMap are not synchronized, so multiple threads can access the ConcurrentHashMap at the same time. But isn't the ConcurrentHashMap itself the "shared resource" that the threads are accessing? Should we use it only if the threads are reading from the map and not writing to it? And if threads also write to the structure, is it better not to use ConcurrentHashMap, but rather the regular HashMap with the synchronized wrapper?
    Any help is greatly appreciated.

    "We need to use a synchronized block or method only if two threads modify a 'shared resource'; if they do not modify a shared resource, we do not need synchronization." Actually, you need to synchronize access to the shared resource for both readers and writers. If one thread is updating an unsynchronized HashMap and a concurrent thread tries to read that map, it may see the map in an inconsistent state. When synchronizing on the map, the reader will be blocked until the writer completes.
    What you don't need to do is prevent multiple readers from accessing the map if there's no writer. However, a synchronized map or Hashtable will single-thread reads as well as writes.
    "On the other hand, the methods of ConcurrentHashMap are not synchronized, so multiple threads can access the ConcurrentHashMap at the same time. But isn't the ConcurrentHashMap itself the 'shared resource' that the threads are accessing?" No, it's actually synchronized at a finer level. Without getting into the details of the HashMap implementation, an object's hashcode is used to identify a linked list of hashmap entries. The only time that you have a concurrency issue is when a reader and a writer are accessing the same list. So ConcurrentHashMap locks only the list that's being updated, and allows readers (and writers!) to access the other bucket lists. Plus, it allows two readers to proceed concurrently.
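
    For what it's worth, a small sketch of the "shared resource without an external lock" point; the key names and counts are arbitrary. Several writer threads update the same ConcurrentHashMap concurrently, relying on merge() being atomic per key rather than on a single table-wide lock:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class SharedCounters {
        public static void main(String[] args) throws InterruptedException {
            Map<String, Long> hits = new ConcurrentHashMap<>();
            ExecutorService pool = Executors.newFixedThreadPool(4);

            for (int t = 0; t < 4; t++) {
                pool.execute(() -> {
                    for (int n = 0; n < 10000; n++) {
                        // Atomic read-modify-write for this key only; other keys
                        // (and plain get() calls) proceed without blocking.
                        hits.merge("page-" + (n % 3), 1L, Long::sum);
                    }
                });
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println(hits);   // 40000 increments spread over three keys
        }
    }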

  • Stored Procedure Concurrency Problem 10g

    Dear all,
    Any help with my problem would be appreciated.
    I am generating ticket numbers using the stored procedure below.
    I need to know the following (I am using Oracle 10g):
    1. Does an Oracle stored procedure handle concurrency by default, does the database manage concurrency when we use stored procedures, or do we have to handle concurrency inside the stored procedure ourselves?
    2. When I generate ticket numbers using this stored procedure, is there any concurrency issue when 100 clients access it concurrently?
    3. Is there an issue or bug in my Java code?
    4. I have already tried a SELECT ... FOR UPDATE statement, but when I used it the row locks in the database hung and the database became stuck:
    SELECT serial_no into newSerial FROM SERIAL_TAB WHERE BR_CODE=xbranch AND SCH_CODE = xscheme for update;
    5. In my WHERE clause I pass the branch and scheme, e.g.:
    SELECT serial_no into newSerial FROM SERIAL_TAB WHERE BR_CODE=xbranch AND SCH_CODE = xscheme;
    When I run this stored procedure, Oracle returns the error 'more than one row returned by query',
    but when I run the query separately it returns exactly one row for the same branch code and scheme code, with no duplicates.
    Why does this happen? It also happens with the UPDATE statement: it ignores the branch code and updates all schemes.
    UPDATE SERIAL_TAB SET serial_no=newSerial WHERE BR_CODE=xbranch AND SCH_CODE = xscheme;
    What should I do? Sorry for the long question, but I am in deep trouble. Could anyone please help?
    In my Java code I use a transaction and setAutoCommit(false) when calling this stored procedure:
    public String getTicketNo(String br, String sch) {
        // get connection
        // setAutoCommit(false);
        // call the stored procedure and get the return value
        // commit;
        // if an error occurs, roll back the transaction
    }
    create or replace PROCEDURE sp_generate_ticket (
        xbranch in varchar,
        xscheme in varchar,
        xresult OUT VARCHAR
    ) AS
        newSerial NUMBER;
    BEGIN
        newSerial := 0;
        SELECT serial_no INTO newSerial FROM SERIAL_TAB WHERE BR_CODE = xbranch AND SCH_CODE = xscheme;
        newSerial := newSerial + 1;
        UPDATE SERIAL_TAB SET serial_no = newSerial WHERE BR_CODE = xbranch AND SCH_CODE = xscheme;
        --- do other operations ---
    END;
    Best Regards,
    Pradeep.
    Edited by: user8958520 on Jan 1, 2012 10:02 PM

    user8958520 wrote:
    "I need to know the following (I am using Oracle 10g):
    1. Does an Oracle stored procedure handle concurrency by default, does the database manage concurrency when we use stored procedures, or do we have to handle concurrency inside the stored procedure ourselves?"
    Oracle is a multi-user and multi-process system. It supports concurrency. It also requires the developer to design and write "thread safe" code. Its concurrency cannot address and fix design flaws in application code.
    "2. When I generate ticket numbers using this stored procedure, is there any concurrency issue when 100 clients access it concurrently?"
    That depends entirely on WHAT that procedure code does, and whether that code is thread safe.
    "4. I have already tried a SELECT ... FOR UPDATE statement, but when I used it the row locks in the database hung and the database became stuck:
    SELECT serial_no into newSerial FROM SERIAL_TAB WHERE BR_CODE=xbranch AND SCH_CODE = xscheme for update;"
    Horrible and utterly flawed approach. This forces serialisation. This means that if that procedure is called by 100 clients, only a SINGLE client can be serviced at a time. ALL OTHERS need to queue and WAIT.
    Serialisation kills database performance.
    What you have is a serious design flaw, not an Oracle issue. And there is no magic solution to make this flawed approach work in a performant and scalable manner. This flaw introduces artificial contention. This flaw enforces serialisation. This flaw means that your application code WILL step on its own toes time and time again.
    The proper solution is to fix this design flaw - and not to use poorly conceived procedures such as sp_generate_ticket that violate fundamental concurrency principles.
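
    One common alternative, sketched below under the assumption that an Oracle sequence (here called ticket_seq, created once with CREATE SEQUENCE ticket_seq) is acceptable for ticket numbers: a sequence hands out distinct values to concurrent sessions without row locks, although the values are not guaranteed to be gap-free. The DataSource wiring and class name are illustrative only, not the poster's code:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class TicketNumbers {
        private final DataSource ds;

        public TicketNumbers(DataSource ds) {
            this.ds = ds;
        }

        public long nextTicketNo() throws SQLException {
            // No SELECT ... FOR UPDATE and no serialisation: each session gets
            // its own value straight from the sequence.
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement("SELECT ticket_seq.NEXTVAL FROM dual");
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }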

  • Concurrent access issue with JPA  for a Select query?

    Hi All,
    I have been trying to understand why this code is causing the exception below:
    public <T> List<T> findManyNativeSql(String queryString, Class<T> resultClass) {
        Query aQuery = getEntityManager().createNativeQuery(queryString, resultClass); // throws the exception below
        ...
    }
    <openjpa-1.1.0-r422266:657916 fatal general error> org.apache.openjpa.persistence.PersistenceException: Multiple concurrent threads attempted to access a single broker. By default brokers are not thread safe; if you require and/or intend a broker to be accessed by more than one thread, set the openjpa.Multithreaded property to true to override the default behavior.
            at org.apache.openjpa.kernel.BrokerImpl.endOperation(BrokerImpl.java:1789)
            at org.apache.openjpa.kernel.BrokerImpl.isActive(BrokerImpl.java:1737)
            at org.apache.openjpa.kernel.DelegatingBroker.isActive(DelegatingBroker.java:428)
            at org.apache.openjpa.persistence.EntityManagerImpl.isActive(EntityManagerImpl.java:606)
            at org.apache.openjpa.persistence.PersistenceExceptions$2.translate(PersistenceExceptions.java:66)
            at org.apache.openjpa.kernel.DelegatingBroker.translate(DelegatingBroker.java:102)
            at org.apache.openjpa.kernel.DelegatingBroker.newQuery(DelegatingBroker.java:1227)
    I have also looked at the query that gets printed in the logs when the exception is thrown:
    [[ACTIVE] ExecuteThread: '32' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR jpa - ID: 133  queryString= select * from Details where cust_name='SETH'
    Any suggestions would be very helpful.
    Also, AppServer: WL10.3 is being used.
    VR

    I'm not sure what a Broker is in OpenJPA, so you may want to post in an OpenJPA forum. I would suspect, though, that a broker sits underneath the EntityManager, which would suggest that this EntityManager instance is being shared among threads. Verify that the EntityManager returned is not being used in multiple threads; if it is used in multiple threads concurrently, this needs to be changed to obtain a new one and release it when done, as EntityManagers are not thread safe. You might also try using EclipseLink as the JPA provider to see if you get a different error message that might point out the problem.
    Best Regards,
    Chris
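
    To illustrate that point, a minimal sketch (not the poster's actual code; the DAO class name and the factory wiring are assumptions) of creating a short-lived EntityManager per call instead of sharing one instance across request threads:

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;

    public class NativeQueryDao {
        private final EntityManagerFactory emf;   // the factory is thread-safe; EntityManagers are not

        public NativeQueryDao(EntityManagerFactory emf) {
            this.emf = emf;
        }

        @SuppressWarnings("unchecked")
        public <T> List<T> findManyNativeSql(String queryString, Class<T> resultClass) {
            EntityManager em = emf.createEntityManager();   // one per call, never shared
            try {
                return em.createNativeQuery(queryString, resultClass).getResultList();
            } finally {
                em.close();                                 // always released, even on failure
            }
        }
    }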

  • How to pass IN parameter as BOOLEAN for concurrent program in Apps(Environ)

    Hi all,
    I am using a standard package procedure to which I need to pass some parameters,
    and some of those parameters are of BOOLEAN type. Can anybody help me understand how to pass an IN parameter as BOOLEAN for a concurrent program in Apps (Environ)?

    Already answered this on the SQL forum (How to give IN parameter as BOOLEAN in a concurrent program).

  • Error when reading BLOB field from Oracle usin Toplink

    We are experiencing a very annoying problem when trying to read a BLOB
    field from Oracle 8.1.6.2.0 using TOPLink 3.6.3. I have attached the
    exception stack trace that is reported to the console. As far as I can
    judge, a fault at oracle.sql.LobPlsqlUtil.plsql_length() happens first and
    then one at TOPLink.Private.DatabaseAccess.DatabasePlatform.convertObject().
    The exception repeats constantly, which is very critical for us.
    ServerSession(929808)--Connection(5625701)--SELECT LOBBODY, ID, LABEL, FK_OBJECT_ID, FK_OBJECTTYPE FROM NOTE WHERE (ID = 80020)
    INTERNAL EXCEPTION STACK:
    java.lang.NullPointerException
    at oracle.sql.LobPlsqlUtil.plsql_length(LobPlsqlUtil.java:936)
    at oracle.sql.LobPlsqlUtil.plsql_length(LobPlsqlUtil.java:102)
    at oracle.jdbc.dbaccess.DBAccess.lobLength(DBAccess.java:709)
    at oracle.sql.LobDBAccessImpl.length(LobDBAccessImpl.java:58)
    at oracle.sql.BLOB.length(BLOB.java:71)
    at TOPLink.Private.Helper.ConversionManager.convertObjectToByteArray(ConversionManager.java:309)
    at TOPLink.Private.Helper.ConversionManager.convertObject(ConversionManager.java:166)
    at TOPLink.Private.DatabaseAccess.DatabasePlatform.convertObject(DatabasePlatform.java:594)
    at TOPLink.Public.Mappings.SerializedObjectMapping.getAttributeValue(SerializedObjectMapping.java:43)
    at TOPLink.Public.Mappings.DirectToFieldMapping.valueFromRow(DirectToFieldMapping.java:490)
    at TOPLink.Public.Mappings.DatabaseMapping.readFromRowIntoObject(DatabaseMapping.java:808)
    at TOPLink.Private.Descriptors.ObjectBuilder.buildAttributesIntoObject(ObjectBuilder.java:173)
    at TOPLink.Private.Descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:325)
    at TOPLink.Private.Descriptors.ObjectBuilder.buildObjectsInto(ObjectBuilder.java:373)
    at TOPLink.Public.QueryFramework.ReadAllQuery.execute(ReadAllQuery.java:366)
    at TOPLink.Public.QueryFramework.DatabaseQuery.execute(DatabaseQuery.java:406)
    I have started the application with Oracle JDBC logging on and found that the problem may originate in a possible lack of synchronization in the pooled connection implementation:
    DRVR FUNC OracleConnection.isClosed() returned false
    DRVR OPER OracleConnection.close()
    DRVR FUNC OracleConnection.prepareCall(sql)
    DRVR DBG1 SQL: "begin ? := dbms_lob.getLength (?); end;"
    DRVR FUNC DBError.throwSqlException(errNum=73, obj=null)
    DRVR FUNC DBError.findMessage(errNum=73, obj=null)
    DRVR FUNC DBError.throwSqlException(reason="Logical handle no longer valid",
    SQLState=null, vendorCode=17073)
    DRVR OPER OracleConnection.close()
    so the prepareCall() is issued against an already closed connection and the
    call fails.
    I assume we have been using a JDBC 2.0 compliant driver. We tried the
    drivers that Oracle supplies for the 8.1.6 and 8.1.7 versions. To be honest, I
    couldn't find any information about the JDBC specification they conform to. Does it
    mean that these drivers may not be 100% compatible with the JDBC 2.0 spec?
    How can I find out whether they are 2.0 compliant?
    I have also downloaded the Oracle 9.2.0.1 JDBC drivers. These seemed to work
    fine until we found another incompatibility, which made us go back to the
    8.1.7 driver:
    UnitOfWork(7818028)--Connection(4434104)--INSERT INTO STATUSHISTORY (CHANGEDATE, FK_SET_STATUS_ID) VALUES ({ts '2002-10-17 16:46:54.529'}, 2)
    INTERNAL EXCEPTION STACK:
    java.sql.SQLException: ORA-00904: invalid column name
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
    at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
    at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:573)
    at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1891)
    at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1093)
    at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2047)
    at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:1940)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2709)
    at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:589)
    at TOPLink.Private.DatabaseAccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:906)
    at TOPLink.Private.DatabaseAccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:960)
    at TOPLink.Private.DatabaseAccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:819)
    at TOPLink.Public.PublicInterface.UnitOfWork.executeCall(UnitOfWork.java:

    Hello Yury,
    I believe the problem is that TopLink's ServerSession by default executes read queries concurrently on the same connection. It does this to reduce the number of required connections for the read connection pool. Normally this is a good concurrency optimization, however some JDBC drivers have issues when this is done. I had thought that with Oracle JDBC 8.1.7 this issue no longer occurred, but perhaps it is only after version 9. I believe that the errors were only with the thin JDBC driver, not the OCI, so using the OCI driver should also resolve the problem. Using RAW instead of BLOB would also work.
    You can configure TopLink to resolve this problem through using exclusive read connection pooling.
    Example:
    serverSession.useExclusiveReadConnectionPool(int minNumberOfConnections, int maxNumberOfConnections);
    This will ensure that TopLink does not try to concurrently execute read queries on the same connection.
    I'm not exactly sure what your second problem with the 9.x JDBC drivers is. From the SQL and stack trace it would seem that the driver does not like the JDBC timestamp syntax. You can have TopLink print timestamp in Oracle's native SQL format to resolve this problem.
    Example:
    serverSession.getLogin().useNativeSQL();
    Make sure you configure your server session before you login, or if using the TopLink Session Manager perform the customizations through a SessionEventListener-preLogin event.

  • Q about Readwrite lock in concurrency

    If I want to implement read-write access to data through a ReentrantReadWriteLock,
    often we will use:
    try {
        mylock.lock();
        // activity...
    } finally {
        mylock.unlock();
    }
    But what if the real "activity" is actually done elsewhere? Can I just make two methods, SetLock() and ReleaseLock():
    void SetLock() {
        mylock.lock();
    }
    void ReleaseLock() {
        mylock.unlock();
    }
    and call them whenever they are needed?

    You're getting this completely back to front. Somewhere you have to create the lock object, store it in a variable, acquire the lock, call the method, and release the lock. The only correct way to do that is to release the lock in a 'finally' block, which implies that you have to allocate the lock in the same method so you have the lock reference. So what you are looking for is this:
    try {
        mylock.lock();
        callTheMethodThatDoesTheActivity();
    } finally {
        mylock.unlock();
    }
    BTW this thread is in the wrong forum. The experts on concurrency are concentrated in the Concurrency forum.
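
    Since the thread title mentions ReentrantReadWriteLock, here is a small sketch (the cache wrapper is made up for illustration) of acquiring and releasing the read and write halves in the same method, with the "activity" being the call in the middle:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class Cache {
        private final Map<String, String> data = new HashMap<>();
        private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        public String get(String key) {
            rw.readLock().lock();          // many readers may hold the read lock at once
            try {
                return data.get(key);      // the "activity"
            } finally {
                rw.readLock().unlock();
            }
        }

        public void put(String key, String value) {
            rw.writeLock().lock();         // writers are exclusive
            try {
                data.put(key, value);
            } finally {
                rw.writeLock().unlock();
            }
        }
    }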

  • Limit number of concurrent connection

    Is there a way for me to limit the number of concurrent
    connections to the media server? My upload bandwidth is about a meg,
    so having too many connections just kills the whole thing. I would
    like to put a limit of 5 connections in place. Is there a way to do
    this?

    Well, I am only using Flash Media Server for displaying videos,
    and just that. The reason I went with the media server vs. progressive
    download was the fullscreen method. Since Flash doesn't natively support
    fullscreen, you have to open another browser window in fullscreen with
    JavaScript. The problem with that is it starts loading from the
    beginning, so I decided to go with the server; that way it only has to buffer
    and get to the same point without having to load the whole video up
    to that point. Now I want to limit the number of people that can be
    watching the video. If I get more than 5 connections, the whole thing
    slows down. I am not too familiar with server-based connections, so
    I don't know how to limit the number of concurrent connections.

  • Concurrent Mgr: ICM Vs Internal Monitor Vs Transaction Mgr Vs Service Mgr

    Hi,
    Could anyone please explain the difference/relation between the Internal Concurrent Manager, Internal Monitor, Transaction Manager, and Service Manager?
    Environment:
    11.5.10.2 EBS on a 4-node RAC environment
    Thanks
    Cherrish Vaidiyan

    Cherrish,
    All CM types and functions are explained in the [Oracle Applications System Administrator Guide - Configuration|http://download-uk.oracle.com/docs/cd/B25516_14/current/acrobat/115sacg.zip] manual, Chapter 7 - Defining Concurrent Managers.

  • How to manage concurrency of a class

    How do I manage concurrency of a class?

    I think you have to use threads for concurrency.
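
    If it helps, a minimal sketch of one common way to do that (the counter class is just an illustration): keep the shared state private and make every method that touches it synchronized, so only one thread at a time runs inside the object:

    public class HitCounter {
        private long count;   // shared state, guarded by this object's monitor

        public synchronized void increment() {
            count++;
        }

        public synchronized long current() {
            return count;
        }
    }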

  • Reg : Setup alerts for Concurrent Programs

    Hi,
    We are using OEM 10g Grid Control for monitoring.
    I am new to OEM, and I now have a task to set up alerts for concurrent programs (ones that have been running for more than 30 minutes).
    Can anyone provide a step-by-step process to set up these alerts?
    Thanks,
    Chandra

    I believe the "Concurrent Manager" is a product supported in the "E-Business" forum.
    Can a moderator move the question there? (with other questions on this product)
    I assume this is not related the Java's concurrency library.

  • A way to list custom concurrent programs, forms & reports on 1159?

    I hope this isn't a stupid question!
    I've just started 11i support and I've been handed a mid-size installation. Needless to say, documentation is lacking. Is there a way to find out what is custom and what is not?

    The custom programs belong to your custom applications.
    Run a query against your custom application names and you'll get all the custom concurrent programs.
