Confused about volatile in the examples in the JDK 1.5 docs

In the Java docs for JDK 1.5 there is an example for ReentrantReadWriteLock like the one below:

class CachedData {
   Object data;
   volatile boolean cacheValid;
   ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

   void processCachedData() {
     rwl.readLock().lock();
     if (!cacheValid) {
        // upgrade lock manually
        rwl.readLock().unlock();   // must unlock first to obtain writelock
        rwl.writeLock().lock();
        if (!cacheValid) { // recheck
          data = ...
          cacheValid = true;
        }
        // downgrade lock
        rwl.readLock().lock();  // reacquire read without giving up write lock
        rwl.writeLock().unlock(); // unlock write, still hold read
     }
     use(data);
     rwl.readLock().unlock();
   }
}

Why is it necessary for cacheValid to be volatile? If it is necessary because more than one thread can access the variable, then why, in the example for the Condition interface, are the variables putptr, takeptr and count not volatile too (they are accessed by more than one thread as well)? The example for the Condition interface in the Java doc is:
class BoundedBuffer {
   final Lock lock = new ReentrantLock();
   final Condition notFull  = lock.newCondition();
   final Condition notEmpty = lock.newCondition();

   final Object[] items = new Object[100];
   int putptr, takeptr, count;

   public void put(Object x) throws InterruptedException {
     lock.lock();
     try {
       while (count == items.length)
         notFull.await();
       items[putptr] = x;
       if (++putptr == items.length) putptr = 0;
       ++count;
       notEmpty.signal();
     } finally {
       lock.unlock();
     }
   }

   public Object take() throws InterruptedException {
     lock.lock();
     try {
       while (count == 0)
         notEmpty.await();
       Object x = items[takeptr];
       if (++takeptr == items.length) takeptr = 0;
       --count;
       notFull.signal();
       return x;
     } finally {
       lock.unlock();
     }
   }
}
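
For what it's worth, here is a minimal sketch (my own illustration, not from the javadoc) of the general visibility rule the two examples appear to rely on: a field that is only ever read and written while holding one and the same lock is made visible to other threads by the lock's memory effects (as with putptr, takeptr and count above), whereas a field that may be read with no lock held at all needs volatile.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class VisibilityDemo {
    private final Lock lock = new ReentrantLock();
    private int count;                  // no volatile needed: only touched under 'lock'
    private volatile boolean shutdown;  // read with no lock held, so it must be volatile

    void increment() {
        lock.lock();
        try {
            count++;     // the unlock() below publishes this write to the next lock()
        } finally {
            lock.unlock();
        }
    }

    void workLoop() {
        while (!shutdown) {  // lock-free read: volatile guarantees this thread
            increment();     // eventually sees another thread's write to 'shutdown'
        }
    }

    void requestShutdown() {
        shutdown = true;     // visible to workLoop() without any locking
    }
}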

Ok. What's the protocol here? Do I post an admission that I probably should not have posted an answer in the first place, since I don't know, or just quietly go away and avoid gunking up the forum with yet another useless reply?
@OP, sorry I didn't understand your original question. I'll watch the thread in the hopes of learning something. If you find out, please post what you learn.

Similar Messages

  • Confused about XPostFacto and the 8GB Limit - Please help

    Hi all,
    I've been running XPostFacto 4.0 on my Apple G3 Desktop (Rev C, 640 MB RAM, 533 MHz G4 OWC CPU upgrade) for some time and it is running great.
    But I have a question about the 8GB rule.
    This computer only runs off of a Seritek SATA card. There is a 35 GB Raptor drive on it, the first 7.81 GB is the 10.3.9 drive, and rest is for applications and files.
    I added a 160 GB Hitachi drive to the SATA card that I moved from my G4 MDD computer. It already has 3 partitions: 30 GB for 10.3.9, 10GB for OS9, and 115 GB for audio production files.
    If I wanted to boot off the Hitachi drive, would I have to re-format and create an 8 GB partition?
    As is, if I try to boot off the drive I just added, XPostFacto tries to synchronize the files, then it invariably crashes.
    I tried reading the manual again, but I get confused about the Mac OS X Installer and the 8GB rule, and so on.
    Thanks!

    I wound up erasing the partition on the transferred drive and CarbonCopyCloning the beige's OS X drive to it... and now the transferred drive is booting OK!
    Now I'm thinking of copying the whole Raptor drive over and re-partitioning the Raptor drive (strangely, the Raptor appears with about 9 partitions in Disk Utility, though the only two that I created are clickable... I wonder if the OS sees this drive as something like SCSI).
    This is a little aside from the point, but on my MDD the 7200 rpm Hitachi seemed to test as "faster" than the 10000 rpm Raptor in most categories. Would the 7200 rpm drive in fact be better for OS-style tasks?
    Powerbook Alum 15 G4, MDD Dual 867 Mhz, Beige Minitower G3 Mac OS X (10.3.9)

  • Some confusion about files on the hard drive and those in LR

    I have been importing files directly into LR and at the same time creating a folder in My Pictures where the original files are stored. I then edit in LR.
    In order to save and then back up edited files I export them to another folder marked "fotos 2010 LR", the first folder being "fotos2010".
    My confusion is having both folders. I know that if I delete the original folder it will also delete the corresponding files in LR.
    Reading this over now, I suppose I could export to the "fotos2010" folder and write over the original files, but I'm afraid to do that. Is that a good idea?
    Am I doing something wrong? I feel like I'm missing something about how LR works.
    Thanks
    Harvey

    The best way forwards rather depends on how far you have gone with your 'new' catalog.
    On the one hand you have an 'old' catalog that contains your edits and your collections, but because you moved the original files and folders outwith LR, the catalog, and its record of the files it knows about, is 'disconnected' from the source files.
    On the other you have a 'new' catalog that knows where all the pictures are but doesn't have your changes and collections.
    If you haven't done much work with the new catalog you might be better to open the original catalog and update it with the location of files it has lost sight of. In library mode right click a directory showing errors and select 'Update folder location'. Depending on the scale of your mix up this might be quite time consuming, proceed slow and sure and all will be OK in the end. What you won't have after this is any work you've done in the new catalog.
    If you've done a lot of work in the new catalog then you are better off staying with it and trying to recover work from the old catalog. You'd have to do the updating of links discussed above first, but having done so you can then write the changes made by LR out to the original picture files. The 'picture' doesn't change, but the file now has a list of changes applied by LR stored in a manner that can be interpreted by LR and other programs (technically not within the file itself but in an associated side-car file). With the old catalog open, select 'Save Metadata to File' with the appropriate pictures selected. Once the data has been written out you can open the new catalog and 'Read Metadata from Files' to get the new catalog to be aware of the changes held within the picture file. That won't recover your collections, however; I think you're stuck there. I don't know of a way to 'export' a collection and 'import' it into a new catalog. Given that a collection is essentially a list of picture files that LR knows about, I can't really see how it would be possible, since the original problem is that the old and new catalogs have different ideas of where the original source photo files are.
    My gut feel - Unless the help here has made you pretty confident I'd stick with the new catalog, take the hit and recreate the collections from scratch.
    Cheers

  • I am confused about documents in the cloud.

    How does this work? Am I supposed to be able to put a document on my Pro and have it sync to my iPhone and iPad? If so, once I put a document on my Pro, where will it appear on my iPhone? I'm confused.

    If it was in Numbers on the iPad it will be in Numbers on the Mac, etc.

  • Confused about schema..

    I'm currently working on a project using VB 2010 Express and Oracle 11g XE. In Oracle SQL Developer, I am currently logged on as 'user X' and I created three tables, all of which are just listed in the 'Tables' tree. Does it mean that 'user X' is like a schema, compared to MySQL?
    I have already integrated my database into my VB project and it is working fine ATM. I'm just really confused about schemas in Oracle 11g XE.

    I'm just really confused about schemas in Oracle 11g XE.
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/intro.htm#CEGJFFFD

  • Confusion about recovery

    Hi,
    I am new to the DBA field and I have a question about recovery.
    My confusion is this: if a database is in NOARCHIVELOG mode, can the database be recovered from committed changes that were in the redo log files?
    If I provide the path names of the redo log files while using RECOVER DATABASE USING CANCEL, will it work at all, given that the database is in NOARCHIVELOG mode?
    Please help to clear my doubts.

    Oracle can use the Online Redo Logs for Recovery. Normally this happens in the case of Instance Recovery (e.g. from a server crash or shutdown abort) -- where the datafiles are not restored from a prior backup.
    If you restore datafiles from a prior backup, you are doing a media recovery. In NOARCHIVELOG mode, you could not have run a backup with the database OPEN, so the backup would have been run with the database SHUTDOWN or MOUNTed. At the subsequent startup, transactions would be in the online redo logs only until LGWR does a "wrap around" and overwrites the first redo log used after the startup. It is only within this window that transactions are in the redo logs.
    Remember that LGWR uses a "round-robin" algorithm to cycle through the online redo logs. So, if the Online Redo Log that was CURRENT at the time of the backup has been overwritten, you cannot use the Online Redo Logs for a RECOVERY.
    You must also ensure that there are no NOLOGGING operations !!
    One thing that you might trip up on is the behaviour of CTAS. A "CREATE TABLE AS SELECT" is, by default LOGGING in an ARCHIVELOG database. However, it is automatically a Direct Path operation in a NOARCHIVELOG database ! So the blocks for such a table would be "corrupt" if you attempt a recovery from the Online Redo Log as the row inserts are not captured.
    Hemant K Chitale

  • I am using the Order Analysis Toolkit and want to get more information about the compensation for "Reference Signal Processing", which is scarce in the manuals, the website and the examples installed with the toolkit.

    I am using the Order Analysis Toolkit and want to get more information about the compensation for "Reference Signal Processing", which is scarce in the manuals, the website and the examples installed with the toolkit.
    In particular, I am analyzing the example "Even Angle Reference Signal Processing (Digital Tacho, DAQmx).vi", whose documentation I am reproducing in the following:
    DESCRIPTION:
    This VI demonstrates how to extract even angle reference signals and remove the slow-roll errors. It uses DAQmx VIs to acquire sound or vibration signals and a digital tachometer signal. This VI includes a two-step process: acquire data at low rotational speed to extract the even angle reference; use the even angle reference to remove the errors in the vibration signal acquired at normal operation.
    INSTRUCTIONS:
    1. Run the VI.
    2. On the DAQ Configurations tab, specify the sample rate, samples per channel, device and channel configurations, and tachometer channel information.
    NOTE: You need to use a DSA PXI-447x/PXI-446x and a PXI TIO device in a PXI chassis to run this example. The DSA device must be in slot 2 of the PXI chassis.
    3. Switch to the Extract Even Angle Reference tab. Specify the number of samples to acquire and the # of revs in reference, which determines the number of samples in the even angle reference. Click Start to take a one-shot acquisition of the vibration and tachometer signals. After the acquisition, you can see the extracted even angle references in Even Angle Reference.
    4. Switch to the Remove Slow-roll Errors tab. Click Start to acquire data continuously and view the compensated results. Click Stop in this tab to stop the acquisition.
    ORDER ANALYSIS VIs USED IN THIS EXAMPLE:
    1. SVL Scale Voltage to EU.vi
    2. OAT Digital Tacho Process.vi
    3. OAT Get Even Angle Reference.vi
    4. OAT Convert to Even Angle Signal.vi
    5. OAT Compensate Even Angle Signal.vi
    My question is: how is the synchronization produced at the time of the compensation? How is it possible to eliminate the errors in a synchronized fashion with respect to the surface of the shaft, bearing in mind that I acquire data at a low rotational speed in order to get the "even angle reference" and then use it to remove the errors in the vibration signal acquired at normal operation? In this application both operations are made in different acquisitions, therefore the reference of the correction signal is lost. Is it simply compensated without synchronizing?
    Our application is based on FPGA and we need to clarify those aspects before implementing the procedure.
    Solved!
    Go to Solution.

    Hi CracKatoA.
    Take a look at the link below:
    http://forums.ni.com/ni/board/message?board.id=170&message.id=255126&requireLogin=False
    Regards,
    Filipe Silva

  • Confused about injecting the entity in EJB 3.0

    Hi all,
    I have a CustomersBean which inherits from CustomersRemote, and my problem is that I am a little confused about injecting the entity (Customer).
    Where do you use the EntityManagerFactory: outside the EJB or inside the EJB? By outside the EJB I mean in the web application or a Java application. I have an example.
    This is inside the EJB:

    public class CustomersBean implements com.savingsaccount.session.CustomersRemote {
        @PersistenceContext(unitName="SavingAccounts")
        EntityManagerFactory emf;
        EntityManager em;

        /** Creates a new instance of CustomersBean */
        public CustomersBean() {
        }

        public void create(int id, String name, String address, String telno, String mobileno) {
            try {
                // This is the entity.
                Customer _customer = new Customer();
                _customer.setId(id);
                _customer.setName(name);
                _customer.setAddress(address);
                _customer.setTelno(telno);
                _customer.setMobileno(mobileno);
                em = emf.createEntityManager();
                em.persist(_customer);
                emf.close();
            } catch (Exception ex) {
                throw new EJBException(ex.toString());
            }
        }
    }

    In the web application, I'm using @EJB in the customer servlet:

    public class CustomerProcessServlet extends HttpServlet {
        @EJB
        private CustomersRemote customerBean;
        // ... inject the request fields coming from the JSP directly
    }
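
    For what it's worth, the usual container-managed pattern in EJB 3.0 injects the EntityManager itself rather than the factory: @PersistenceContext is meant for an EntityManager, while @PersistenceUnit is the annotation that injects an EntityManagerFactory. A minimal sketch, reusing the SavingAccounts unit name and the CustomersRemote/Customer types from the post (the @Stateless annotation and the simplified method body are my assumptions, not from the original thread):

    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    // Sketch only: the container opens and closes the EntityManager, so
    // there is no createEntityManager()/close() pair to manage, and
    // persist() joins the container's JTA transaction automatically.
    @Stateless
    public class CustomersBean implements CustomersRemote {

        @PersistenceContext(unitName = "SavingAccounts")
        private EntityManager em;

        public void create(int id, String name, String address,
                           String telno, String mobileno) {
            Customer customer = new Customer();
            customer.setId(id);
            customer.setName(name);
            customer.setAddress(address);
            customer.setTelno(telno);
            customer.setMobileno(mobileno);
            em.persist(customer);
        }
    }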


  • Confused about extending the Sprite class

    Howdy --
    I'm learning object oriented programming with ActionScript and am confused about the Sprite class and OO in general.
    My understanding is that the Sprite class allows you to group a set of objects together so that you can manipulate all of the objects simultaneously.
    I've been exploring the Open Flash Chart code and notice that the main class extends the Sprite class:
    public class Base extends Sprite {
    What does this enable you to do?
    Also, on a related note, how do I draw, say, a line once I've extended it?
    Without extending Sprite I could write:
    var graphContainer:Sprite = new Sprite();
    var newLine:Graphics = graphContainer.graphics;
    And it would work fine. Once I extend the Sprite class, I'm lost. How do I modify that code so that it still draws a line? I tried:
    var newLine:Graphics = this.graphics;
    My understanding is that since I'm extending the Sprite class, I should still be able to call its graphics method (or property? I have no idea). But it yells at me, saying "1046: Type was not found or was not a compile-time constant: Graphics."

    Thanks -- that helped get rid of the error, I really appreciate it.
    Alas, I am still confused about the extended Sprite class.
    Here's my code so far. I want to draw an x-axis:
    package charts {
        import flash.display.Sprite;
        import flash.display.Graphics;

        public class Chart extends Sprite {
            // Attributes
            public var chartName:String;

            // Constructor
            public function Chart(width:Number, height:Number) {
                this.width = width;
                this.height = height;
            }

            // Methods
            public function render() {
                drawAxis();
            }

            public function drawAxis() {
                var newLine:Graphics = this.graphics;
                newLine.lineStyle(1, 0x000000);
                newLine.moveTo(0, 100);
                newLine.lineTo(100, 100);
            }
        }
    }
    I instantiate Chart by saying var myChart:Chart = new Chart(); then I say myChart.render(); hoping that it will draw the axis, but nothing happens.
    I know I need the addChild method somewhere in here but I can't figure out where or what the parameter is, which goes back to my confusion regarding the extended Sprite class.
    I'll get this eventually =)

  • Confused about the log files

    I have written an application that has a Primary and Secondary database. The application creates tens-of-thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the secondary as that element does not change) or it will create new records.
    The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
    The input data I am testing with is originally 2Mb as a CSV file and with a fresh database it creates almost 20Mb of data. This is about right for the way it splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, and the old ones in the earlier logs would be redundant and the cleaner thread would clean them up. I am explicitly cleaning as per the examples. The issue is that each run, the data just doubles in size! Logically it is fine, physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%) where I would expect most to be empty, or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
    Generally the processing I am doing on the primary is looking up the key, if it is there updating the entry, if not creating one. I have been using a cursor to do this, and using the putCurrent() method for existing updates, and put() for new records. I have even tried using Database.delete() and the full put() in place of putCurrent() - but no difference (except it is slower).
    Please help - it is driving me nuts!

    Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
    a. is the application doing anything that prohibits log cleaning? (in your case, no)
    b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
    c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
    1) Ran DbDump with and without -r. I am expecting the data to stay consistent. So, after the first run it creates the data, and leaves 20mb in place, 3 log files near 100% used. After the second run it should update the records (which it does from the application's point of view) but I now have 40mb across 5 log files all near 100% usage.
    I think that it's accurate to say that both of us are not surprised that the second run (which updates data but does not change the number of records) creates a second 20MB of log, for a total of 40MB. What we do expect, though, is that the utilization reported by DbSpace should fall closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
    Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
    run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
    run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
    run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
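
    (Aside: a minimal sketch of the batch cleanLog calls mentioned above, assuming env is an open com.sleepycat.je.Environment; my sketch, not verbatim from the JE docs:)

    import com.sleepycat.je.CheckpointConfig;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;

    class LogCleaningHelper {
        // cleanLog() returns the number of files cleaned in this call, so we
        // loop until no more files qualify; the forced checkpoint afterwards
        // is what lets the cleaned files actually be deleted (or archived).
        static void batchClean(Environment env) throws DatabaseException {
            while (env.cleanLog() > 0) {
                // keep cleaning until cleanLog() reports 0 files cleaned
            }
            CheckpointConfig force = new CheckpointConfig();
            force.setForce(true);
            env.checkpoint(force);
        }
    }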
    So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
    I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (though not in key order; without -r it is slower, but dumps in key order). Keys and data should show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
    I think I asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
    java -jar je.jar DbPrintLog -h <envhome> -S
    and you'll see some output that talks about different types of log entries, and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large that is depends on whether your data is transactional or not. Non-transactional data records have a header of about 35 bytes, whereas transactional data records have 60 bytes added to them. If your data is small, that can be quite a large percentage. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record level locking, and partly because we store a number of internal fields as 64 bit rather than 16 or 32 bit values.
    The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
    A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
    In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem is really around the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduced the size of your logs, it reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
    So in summary, let's try these steps
    - use DbDump and DbPrintLog to double check the amount and size of your application data
    - make a table of runs, that shows the log size in bytes, number of log files, and the utilization level reported by DbSpace
    - run a je.cleaner.forceCleanFiles cleanLog loop on one of the logs that seems to have a high utilization level, and see how much it reduces to, and what the resulting utilization level is
    If it all points to JE, we'll probably take it offline, and ask for your test case.
    Regards,
    Linda

  • Confused about the meaning of  Time Quota Types

    I am learning SAP-HCM on IDES 6.0. I am confused about the meaning of Time Quota Type. I have gone through the SAP documentation, but I am still not clear about it. Please help me with a few examples. How is it different from an Absence Type?

    Hi Gopal ,
    Absences are very generic ones that we create, which need to be reflected in IT2001, and deduction can happen.
    Absence Quotas are the limited entitlement that is fixed: say you're eligible for 10 days of sick leave each year, so this becomes a Quota, say Sick Leave Quota = 10, and will be seen in IT2006.
    Now an absence needs to be linked to this Quota for deduction.
    An absence may or may not be linked to a Quota. This depends on the business requirement.
    Let me know if you have further questions.
    Thanks
    Swati

  • Confused about the default schema

    Hi,
    I am a little bit confused about the schema concept.
    I want to create a new schema called APP and then create several users and roles based on the schema APP. The default schema for those users should be the APP schema.
    How can I make the schema APP the default schema for the new users that I am creating?
    I feel that there are some schema design concepts that I have to learn. Is there any resource on the internet that I can read to learn more about Oracle schema design best practices?
    Any help would be appreciated,
    Ali

    A schema holds object definitions, and in the case of table & index objects the schema also holds the data.
    A user owns the schema.
    Therefore the user owns the definitions (including any functions, procedures, sequences, tables, etc.).
    Other users may be granted access to some, or all, of the objects in a schema. This is done through the 'GRANT ...' command. For example, consider the following steps:
    1) create user app_owner
    2) create table object test owned by app_owner
    3) create user app_user
    4) grant select, update, insert and delete on app_owner's test table to app_user
    5) add synonyms to avoid needing to qualify the table's schema name.
    done as follows:
    oracle@fuzzy:~> sqlplus system
    SQL*Plus: Release 10.2.0.1.0 - Production on Mon Apr 3 20:07:32 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Enter password:
    Connected to:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Create the app owner userid. Note there is no need to ever log in to that user, even to create tables.
    SQL> create user app_owner
      2  identified by xyz
      3  account lock
      4  quota unlimited on users
      5  default tablespace users
      6  temporary tablespace temp;
    User created.
    Creating objects in a schema can be done by providing the schema name, or by switching schema in newer versions of Oracle
    SQL> create table app_owner.test ( t number );
    Table created.
    Create a userid that will access the table. Set that userid up to access the database and (for the future) give it the capability to create its own synonyms.
    SQL> create user app_user
      2  identified by xyz
      3  temporary tablespace temp;
    User created.
    SQL> grant create session to app_user;
    Grant succeeded.
    SQL> grant create synonym to app_user;
    Grant succeeded.
    Now give the user access to the objects
    SQL> grant select, update, insert, delete on app_owner.test to app_user;
    Grant succeeded.
    Let's test it out. Insert by qualifying the schema name on the object, then create a synonym to avoid using schema, and try it all using the synonym
    SQL> connect app_user/xyz
    Connected.
    SQL> insert into app_owner.test values (4);
    1 row created.
    SQL> create synonym test for app_owner.test;
    Synonym created.
    SQL> insert into test values (3);
    1 row created.
    SQL> select * from test;

             T
    ----------
             4
             3

    Note that some people want to use PUBLIC grants and PUBLIC synonyms. This is a real bad idea if you want to ensure long-term security of the data and want to host several different applications in the same Oracle instance.
    This, and a whole lot more, is in the 'Concepts' manual for your version of the database at http://docs.oracle.com
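
    (Aside on the "switching schema" approach mentioned above: in Oracle this is ALTER SESSION SET CURRENT_SCHEMA, which changes name resolution only, not privileges. A minimal hedged sketch from JDBC, since the threads here drive Oracle from client code; the connection URL, user and password are placeholders, and app_owner/test are the names from the example:)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CurrentSchemaDemo {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//localhost:1521/XE", "app_user", "xyz");
                 Statement stmt = conn.createStatement()) {
                // Unqualified names now resolve against app_owner's schema.
                // Grants are still required; this changes name resolution only.
                stmt.execute("ALTER SESSION SET CURRENT_SCHEMA = app_owner");
                stmt.executeQuery("SELECT * FROM test"); // resolves to app_owner.test
            }
        }
    }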

  • I am confused about something.  How do I read a book on my MacBook Pro?  I can't find the iBook app anywhere, which is what I use on my iPad.  The book I want to read is in my iTunes but I can't click on it.  My iBook library does not show up in iTunes.

    I am confused about something.  How do I read a book on my MacBook Pro?  I can't find the iBook app anywhere, which is what I use on my iPad.  The book I want to read is in my iTunes but I can't click on it.  Some of my iBooks show up in my iTunes but they are "grayed" out.  The only books that respond in iTunes are audiobooks and that's not what I'm looking for.  Is this a stupid question?

    Nevermind - I answered my own question, which is I CAN'T READ ANY BOOKS I purchased in iBooks on my MacBook Pro.  If I want to read on my Mac I have to use Kindle or Nook.  Which means any book I've already purchased through iBooks has to be read on my iPad.  Kind of a drag because there are times when it's more convenient for me to read while I'm sitting with my Mac.

  • Regarding the product time capsule...is the modem the same as airport extreme and is the disk drive always running? I'm worried about it lasting for at least five years.

    Regarding the product Time Capsule... is the modem the same as the AirPort Extreme's, and is the disk drive always running? I'm worried about it lasting at least five years.

    John,
    I'd pay good money to bet it wouldn't last 5 years... I don't rate the built-in power supply, and as for "server grade hard disk" - hmmmm... The failure rate of all HDs on the market after 3 years is 60%.
    Regards,
    Shawn

  • Now that I have backed up, can I delete items from my desktop? One of the reasons I got the time capsule was so that I could free up some space on my computer. I am confused about whether the backup will one day remove my photos/video

    I purchased a 2TB time capsule yesterday. I set it up as a router and did the backup no problem. I then navigated the backup folder and found that all my photos/video that I am nervous about losing are on there. So far so good.
    Taking a step back, the reason I bought the time capsule was 1.) I needed a router 2.) I have a mac and 3.) I am running out of disk space on that mac (I shoot and cut a lot of video and have years of high quality pictures on my mac hard drive)
    Can I now delete them from my mac computer to free space? I have used standard external disk drives in the past, but the whole "Back Up" piece of things has me confused. I love the idea of backing up my computer so I want to keep that functionality, but will the drive still function as a static external hard drive? Or do I need to move that material in separately as a folder outside of the backup folder?
    I am nervous that if the backup overwrites information as the disk space becomes limited that in 10 years when I fill this drive up, that I will lose all of my photos that are part of the backups that I am running now.....
    Or worse, I am nervous that if I remove things from my Mac right now, the next time a backup is performed it will drop this data, as it is no longer on the device I am backing up. How does this work?
    I apologize, the back up is a very new concept to me and I want to make sure I do not goof anything up.

    applefool wrote:
    Taking a step back, the reason I bought the time capsule was . . . 3.) I am running out of disk space on that mac
    That's an entirely different thing from backups.  A backup is an extra copy, in case the original is lost or damaged.   Additional space is just that -- more space for originals.
    While it's possible to use the same disk (such as the TC's internal HD) for both things, it's dangerous -- when (not if) something happens to the TC, you risk losing the originals that are on it.   To be safe, you need (at least) two copies of everything important, in (at least) two different places.  
    Many (including me) recommend at least three copies (originals plus 2 backups).  While it's not common for the Mac's hard drive to fail about the same time as the backup drive, it does happen.  There are several threads here where it did, and very expensive data recovery was needed, but in some cases everything was lost.
    So as the others recommend, getting an external HD for the stuff there isn't room for on your Mac is one solution for not having enough space.  But you might explore getting a larger internal HD.   If possible, that might be better.
    Then, also get another external, for "secondary" backups, so you're doubly protected.  If you get a portable model, you can take it offsite for even better protection.  See #27 in Time Machine - Frequently Asked Questions for details and some suggestions.
    Can I now delete them from my mac computer to free space? I have used standard external disk drives in the past, but the whole "Back Up" piece of things has me confused.
    You're not the first or only one. 
    There are different types of backup apps, so there are different answers for the different types.
    As the others have posted, Time Machine will, sooner or later, delete its backup copies of things that are no longer on your system.  Depending on how long the original was there and when backups were run, that can be in as little as 24 hours, or as long as there's room.   So no, don't take the chance with data that's important!
    Is there helpful information on how to add an external drive to your backup set up?
    See the green box in #2 of the FAQ article.  All you have to do is format it for a Mac and remove it from the exclusion list.
               Once I set it up, will I need to leave the hard drive plugged into my mac in order for the data to be backed up?
    It can only be backed up while it's connected.
               If I do, and a backup is performed without the hard drive attached to my computer, will it remove the backup of what was on the hard drive?
    No (unless you leave it disconnected until Time Machine starts deleting old backups).
    It will back up the external when it's connected, and not complain if it isn't.
