A little clarification on URLVariables?

I'm not entirely sure how to use it / what's the best
approach here.
I have a whole bunch of form fields that I need to send up to
my online processing script - a couple of hundred or so.
Am I right in thinking that the URLVariables constructor
simply takes a querystring-style string (ie: name=value pairs,
separated by ampersands)? So I could do something like:
quote:
var sURLVariables = "action=update";
for( var i = 0; i < CONST_NEWS_ITEM_COUNT; i++ ) {
    sElementId = "txtTitle" + i;
    sElementValue = document.getElementById(sElementId).value;
    sURLVariables += '&' + sElementId + '=' + sElementValue;
}
var oVariables = new air.URLVariables(sURLVariables);
Cheers :)
ps. Yes I've read the docs -
http://help.adobe.com/en_US/AIR/1.1/jslr/index.html
- it's still not clear to me what I can do with it...

Hi,
You are right.
But you could also simply create a URLVariables object with the empty
constructor and then set each variable as a dynamic property, for
example: oVariables[sElementId] =
document.getElementById(sElementId).value;
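As a sketch of both approaches (plain objects stand in for air.URLVariables here, since the AIR runtime isn't assumed, and the field values are hard-coded for illustration; note that encodeURIComponent guards against values containing '&' or '='):

```javascript
// Stand-ins for the form field values read via document.getElementById(...)
var fields = { txtTitle0: "First headline", txtTitle1: "R&D news" };

// Approach 1: build a querystring and hand it to the constructor,
// i.e. new air.URLVariables(queryString) in AIR.
var queryString = "action=update";
for (var id in fields) {
    queryString += "&" + encodeURIComponent(id) + "=" + encodeURIComponent(fields[id]);
}

// Approach 2: start from the empty constructor, i.e. new air.URLVariables(),
// and set each variable as a dynamic property.
var oVariables = {};
oVariables.action = "update";
for (var id in fields) {
    oVariables[id] = fields[id];
}

console.log(queryString); // action=update&txtTitle0=First%20headline&txtTitle1=R%26D%20news
```

Either way works; the dynamic-property route saves you the string concatenation and the escaping concerns.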

Similar Messages

  • Need clarification regarding select query

    Hi,
    I need a little clarification regarding a SELECT scenario.
    I want to select data from a table that has been manipulated between certain dates, e.g. between 01-DEC-10 and 31-DEC-10, and note that the table does not have any date/time column. I've applied the following query to do this:
    select * from TABLE_NAME where sysdate between to_date('01-DEC-10') AND to_date('31-DEC-10');
    Would it work fine? I've tried it against a table and it returned nothing, even though DML occurred in that time period.
    Regards,
    Abbasi

    Abbasi wrote:
    I want to select data from a table that has been manipulated between certain dates ... note that the table does not have any date/time column. ... Would it work fine?
    AFAIK, without log mining and auditing this is not possible. Note that sysdate is evaluated at the moment the query runs, so your WHERE clause compares the current date against that range; it says nothing about when individual rows were changed.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm

  • Weblogic - Fusion Middleware Cluster - Clarification - 11g

    Hello All,
    I've checked the forum and would need just a little clarification regarding weblogic cluster & fusion Middleware + Load Balancing.
    My current Setup is:
    ==============
    1 APP Server: Runs Fusion Middleware 11g Forms and Reports + Weblogic + OHS (windows 2008R2)
    1 DB Server: Standalone database 11gR2 installation (windows 2008R2)
    I've been asked to add a new app server and setup load balancing between the two app servers in order to split the incoming forms & reports requests.
    What do I do? Do I install the new app server like the first one and just add it to an existing domain(clustering)? Is there a difference between fusion middleware clustering and weblogic clustering (as in database RAC - the whole idea being failover 24/7 no single point of failure?)
    I would appreciate any assistance, and need not great detail, just need to make sense of the whole thing.
    I'm already on with the following document: [http://docs.oracle.com/cd/E21764_01/web.1111/e13709/toc.htm]
    Thanks in advance
    Jan S.

    Hi Jan
    You are almost there. I would like to clarify some points for you.
    1. You already have WLS + Forms/Reports and of course the backend RCU database that has all the metadata for Forms/Reports. This RCU database is a single-server Oracle database.
    I've been asked to add a new app server and setup load balancing between the two app servers in order to split the incoming forms & reports requests.
    2. The first question for the above requirement is: do you need the new server on the same existing physical box, or on a different remote physical box? If on the same physical box (which I doubt), then there is NO need to install any new software like WLS, Forms/Reports etc. All you do is create a new Managed Server and add it to your existing Domain. Now, do you currently have a Clustered Domain? If so, all you do is add this new server to that cluster. If not, then you need to add a new Cluster and add both your old and this NEW managed server to that Cluster. If your existing domain is single-server and very recent, I would prefer to create a brand new Clustered Domain with different port numbers (you can change port numbers anytime later also), then add/create one or two managed servers.
    3. If you are given a new physical box for this new managed server, then YES, you do need to install exactly the SAME version of WebLogic Server and the same version of Forms/Reports server on the new machine. Then create a Clustered Domain with 2 managed servers: one managed server on the first box, where you will have Admin Server + Managed Server 1. Then run the pack command, go to the other machine, and run the unpack command to create the second Managed Server. Now you've got 2 managed servers in a single Clustered Domain.
    4. Now comes the load balancing part. Never expose the individual managed servers' hosts, ports, and web-app URLs for any Clustered Domain. Instead, install the Apache HTTP Server (Oracle HTTP Server is essentially Oracle's distribution of it). All you do is install Apache, copy some plugin .dll files (.so on Linux) into the Apache installation, then update the httpd.conf or weblogic.conf file with the details of the clustered domain's managed servers: host, port, URL patterns, etc.
    5. All the External Requests will now go to Apache Web Server like http://apachehost:apacheport/xxxUrl. This Apache will take care of LOAD Balancing, Failover etc and based on input url pattern will redirect requests to back end cluster of weblogic managed servers.
    6. The backend RCU database that has the metadata for Forms/Reports NEED NOT be clustered. If you really have an extremely heavy load, then you can have an Oracle RAC database: have multiple DB nodes in RAC, create a DataSource in the WebLogic console for each node, and finally one master Multi Data Source that uses all these single DataSources. But you very rarely need RAC or a clustered database for RCU, so for now just go with a single non-clustered database that has the RCU schemas.
    In conclusion, even though you have Forms/Reports, just follow the standard WebLogic Clustered Domain architecture for your requirement. All clustering is at the application server level and has nothing to do with any Oracle software like SOA/BPM, Forms/Reports, BI etc.
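    To make step 4 concrete, the proxy wiring in httpd.conf might look roughly like this (a hypothetical excerpt; the module path, host names, ports, and URL pattern are placeholders to adapt to your own environment):

    ```apache
    # Load the WebLogic proxy plugin (mod_wl / mod_wl_ohs)
    LoadModule weblogic_module modules/mod_wl_ohs.so

    # Route matching requests across both clustered managed servers;
    # the plugin handles load balancing and failover between them.
    <Location /forms>
        SetHandler weblogic-handler
        WebLogicCluster apphost1:9001,apphost2:9001
    </Location>
    ```

    External clients then only ever see the Apache host and port, never the individual managed servers.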
    Thanks
    Ravi Jegga

  • CIS Scanning to folder setup - Clarification please

    Hi
    I'm very new to Xerox network scanning and am hoping for a little clarification on the setup process.
    I'm looking at setting up scanning for individual members of staff on a WorkCentre 7556.
    At the moment there are 4 Templates set up under 'Scan' in Centreware Internet Services (which I'll call CIS if it's ok...).
    Each is for a department and is scanning to a share on a machine on the same subnet using smb.
    There are no file repositories set up under CIS Properties->Services->Workflow Scanning->File Repository Setup but the scanning works fine (except that the share host keeps shutting down).
    Everywhere I look up scanning in the documentation / on the web, it always seems to start off by talking about setting up Workflow Scanning File Repositories.
    What I'm wondering is: what, essentially, is the difference between these and the way it's been done previously, i.e. just setting up a Scan template using SMB with server, share, and authentication credentials?
    Many thanks

    Hi Pauliolio,
    Thank you for using the Support Forum. The difference is whether you are using FTP or SMB. Since you are using SMB, the repository is a shared directory, which you must already have set up, since your scan is working. If you have additional questions, please consider contacting your support centre for further assistance.

  • Import statement and directory structure

    First of all, sorry for such a long post; I believe part of it is because I am unsure of the concept of importing in Java. Secondly, thanks to anyone who can ultimately enlighten me about the concept of import. I did ask this question before in the "errors and error handling" forum, and the people who helped me there did a great job, but I believe I require a little more clarification and thus have decided to post here.
    Anyhow, my question..
    Could someone explain to me the concept of the import statement, or direct me to a webpage with sort of explanation for newbies? For some reason, I am having a hard time grasping the concept.
    As I understand it, the import statement in Java is very similar to the namespace keyword in C++. That is to say, import doesn't actually "import" any source code, the way that the #include directive does in C.
    So I suppose what my question is, say I have a java class file like below:
    //filename: sentence.java
    //located: c:\school\csc365
    package csc365;
    class sentence {
        //some variables here..
        //some constructor here..
        //some methods here..
    }
    And some sample program like the one below which implements the above..
    //filename: test.java
    //located: c:\school\csc365
    import csc365.*;
    import java.io.*;
    class test {
        //creates some sentence object
        //uses the object's methods
        //some other things.
    }
    As I understand it, the test.java file should not compile because the csc365 package is not in the correct directory. (assuming of course, the classpath is like c:\school\csc365;c:\school )
    But, ... where then should the sentence.java be located? In a subdirectory of c:\school called csc365 (i.e c:\school\csc365\) ?
    And thus that would mean the test.java file could be located anywhere on the hard drive?
    I suppose, I just need a little clarification on the correlation between a package's "name" (i.e package csc365; ) and its corresponding directory's name, and also how the javac compiler searches the classpath for java classes.
    ..So, theoretically if I were to set the classpath to look in every conceivable directory(provided the directory names were all unique) of the harddrive, then I could compile a test.java anywhere?
    As a note: I have been able to get the test.java file to compile, by leaving out the import statement in the test.java file, and also leaving out the package statement for the sentence class, but I assume this is because the files are defaulted to the same package?

    Hi Mary,
    No, import isn't analogous to C++ namespace - Java package is closer to the namespace mark.
    import is just a convenience for the programmer. You can go your whole Java career without ever writing an import statement if you wish. All that means is that you'll have to type out the fully-resolved class name every time you want to use a class that's in a package other than java.lang. Example:
    // NOTE: No import statements
    public class Family {
       // NOTE: fully-resolved class names
       private java.util.List children = new java.util.ArrayList();
    }
    If you use the import statement, you can save yourself some typing:
    import java.util.ArrayList;
    import java.util.List;
    public class Family {
       // NOTE: short class names, resolved via the imports above
       private List children = new ArrayList();
    }
    import isn't the same as a class loader. It does not bring in any source code at all.
    import comes into play when you're compiling or running your code. Java will check to make sure that any "shorthand" class names you give it live in one of the packages you've imported. If it can't find a matching fully-resolved class name, it'll give you a message like "cannot find symbol".
    I arrange Java source in a directory structure that matches the package structure in the .class files.
    If I've got a Java source file like this:
    package foo.bar;
    public class Baz {
       public static void main(String[] args) {
            Baz baz = new Baz();
            System.out.println(baz);
       }
       public String toString() {
           return "I am a Baz";
       }
    }
    I'll store it in a directory structure like this:
    root
    +---classes
    +---src
          +---foo
               +---bar
                    +---Baz.java
    When I compile, I go to root and compile by typing this:
    javac -d classes src/foo/bar/*.java
    I can run the code from root by typing:
    java -classpath classes foo.bar.Baz
    I hope this wasn't patronizing or beneath you. I don't mean to be insulting. - MOD

  • A quick primer on audio drivers, devices, and latency

    This information has come from Durin, Adobe staffer:
    Hi everyone,
    A  common question that comes up in these forums over and over has to do  with recording latency, audio drivers, and device formats.  I'm going to  provide a brief overview of the different types of devices, how they  interface with the computer and Audition, and steps to maximize  performance and minimize the latency inherent in computer audio.
    First, a few definitions:
    Monitoring: listening to existing audio while simultaneously recording new audio.
    Sample: The value of each individual measurement of the audio signal digitized by the audio device. Typically, the audio device measures the incoming signal 44,100 or 48,000 times every second.
    Buffer Size: The "bucket" where samples are placed before being passed to the destination. An audio application will collect a buffers-worth of samples before feeding it to the audio device for playback. An audio device will collect a buffers-worth of samples before feeding it to the audio application when recording. Buffers are typically measured in samples (common values being 64, 128, 512, 1024, 2048...) or in milliseconds, which is simply a calculation based on the device sample rate and buffer size.
    Latency: The time span that occurs between  providing an input signal into an audio device (through a microphone,  keyboard, guitar input, etc) and when each buffers-worth of that signal  is provided to the audio application.  It also refers to the other  direction, where the output audio signal is sent from the audio  application to the audio device for playback.  When recording while  monitoring, the overall perceived latency can often be double the device  buffer size.
    ASIO, MME, CoreAudio: These are audio driver models, which simply specify the manner in which an audio application and audio device communicate. Apple Mac systems use CoreAudio almost exclusively, which provides for low buffer sizes and the ability to mix and match different devices (called an Aggregate Device). MME and ASIO are mostly Windows-exclusive driver models, and provide different methods of communicating between application and device. MME drivers allow the operating system itself to act as a go-between and are generally slower, as they rely upon higher buffer sizes and have to pass through multiple processes on the computer before audio is sent to the device. ASIO drivers provide an audio application direct communication with the hardware, bypassing the operating system. This allows for much lower latency, at the cost of limiting an application's ability to access multiple devices simultaneously or share a device channel with another application.
    Dropouts: Missing  audio data as a result of being unable to process an audio stream fast  enough to keep up with the buffer size.  Generally, dropouts occur when  an audio application cannot process effects and mix tracks together  quickly enough to fill the device buffer, or when the audio device is  trying to send audio data to the application more quickly than it can  handle it.  (Remember when Lucy and Ethel were working at the chocolate  factory and the machine sped up to the point where they were dropping  chocolates all over the place?  Pretend the chocolates were samples,  Lucy and Ethel were the audio application, and the chocolate machine is  the audio device/driver, and you'll have a pretty good visualization of  how this works.)
    Typically, latency is not a problem if  you're simply playing back existing audio (you might experience a very  slight delay between pressing PLAY and when audio is heard through your  speakers) or recording to disk without monitoring existing audio tracks  since precise timing is not crucial in these conditions.  However, when  trying to play along with a drum track, or sing a harmony to an existing  track, or overdub narration to a video, latency becomes a factor since  our ears are far more sensitive to timing issues than our other senses.   If a bass guitar track is not precisely aligned with the drums, it  quickly sounds sloppy.  Therefore, we need to attempt to reduce latency  as much as possible for these situations.  If we simply set our Buffer  Size parameter as low as it will go, we're likely to experience dropouts  - especially if we have some tracks configured with audio effects which  require additional processing and contribute their own latency to the  chain.  Dropouts are annoying but not destructive during playback, but  if dropouts occur on the recording stream, it means you're losing data  and your recording will never sound right - the data is simply lost.   Obviously, this is not good.
    Latency under 40ms is  generally considered within the range of reasonable for recording.  Some  folks can hear even this and it affects their ability to play, but most  people find this unnoticeable or tolerable.  We can calculate our  approximate desired buffer size with this formula:
    (Samples per second / 1000) * desired latency in ms
    So,  if we are recording at 44,100 Hz and we are aiming for 20ms latency:   44100 / 1000 * 20 = 882 samples.  Most audio devices do not allow  arbitrary buffer sizes but offer an array of choices, so we would select  the closest option.  The device I'm using right now offers 512 and 1024  samples as the closest available buffer sizes, so I would select 512  first and see how this performs.  If my session has a lot of tracks  and/or several effects, I might need to bump this up to 1024 if I  experience dropouts.
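    The arithmetic above can be sketched as follows (hypothetical helper names; the sample rate and buffer options are the ones from the example):

    ```javascript
    // Buffer size (in samples) that corresponds to a target latency.
    function desiredBufferSamples(sampleRate, latencyMs) {
        return (sampleRate / 1000) * latencyMs;
    }

    // Devices only offer fixed buffer sizes, so pick the closest one.
    function closestBufferSize(desired, offered) {
        return offered.reduce(function (best, size) {
            return Math.abs(size - desired) < Math.abs(best - desired) ? size : best;
        });
    }

    var desired = desiredBufferSamples(44100, 20);        // 882 samples for 20 ms at 44.1 kHz
    var chosen = closestBufferSize(desired, [512, 1024]); // 1024 is numerically closest
    ```

    Note that the paragraph above starts with 512 anyway: when the numerically closest option is the larger one, trying the next size down first gives lower latency, as long as the system can keep up without dropouts.
    
    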
    Now that we hopefully have a pretty  firm understanding of what constitutes latency and under what  circumstances it is undesirable, let's take a look at how we can reduce  it for our needs.  You may find that you continue to experience dropouts  at a buffer size of 1024 but that raising it to larger options  introduces too much latency for your needs.  So we need to determine  what we can do to reduce our overhead in order to have quality playback  and recording at this buffer size.
    Effects: A  common cause of playback latency is the use of effects.  As your audio  stream passes through an effect, it takes time for the computer to  perform the calculations to modify that signal.  Each effect in a chain  introduces its own amount of latency before the chunk of audio even  reaches the point where the audio application passes it to the audio  device and starts to fill up the buffer.  Audition and other DAWs  attempt to address this through "latency compensation" routines which  introduce a bit more latency when you first press play as they process  several seconds of audio ahead of time before beginning to stream those  chunks to the audio driver.  In some cases, however, the effects may be  so intensive that the CPU simply isn't processing the math fast enough.   With Audition, you can "freeze" or pre-render these tracks by clicking  the small lightning bolt button visible in the Effects Rack with that  track selected.  This performs a background render of that track, which  automatically updates if you make any changes to the track or effect  parameters, so that instead of calculating all those changes on-the-fly,  it simply needs to stream back a plain old audio file which requires  much fewer system resources.  You may also choose to disable certain  effects, or temporarily replace them with alternatives which may not  sound exactly like what you want for your final mix, but which  adequately simulate the desired effect for the purpose of recording.   (You might replace the CPU-intensive Full Reverb effect with the  lightweight Studio Reverb effect, for example.  Full Reverb effect is  mathematically far more accurate and realistic, but Studio Reverb can  provide that quick "body" you might want when monitoring vocals, for  example.)  You can also just disable the effects for a track or clip  while recording, and turn them on later.
    Device and Driver Options: Different devices may have wildly different performance at the same buffer size and with the same session. Audio devices designed primarily for gaming are less likely to perform well at low buffer sizes than those designed for music production, for example. Even if the hardware performs the same, the driver model may be a source of latency. ASIO is almost always faster than MME, though many device manufacturers do not supply an ASIO driver. Third-party, device-agnostic drivers such as ASIO4ALL (www.asio4all.com) allow you to wrap an MME-only device inside a faux-ASIO shell. The audio application believes it's speaking to an ASIO driver, and ASIO4ALL has been streamlined to work more quickly with the MME device, or even to allow you to use inputs and outputs on separate devices, which ASIO would otherwise prevent.
    We  also now see more USB microphone devices which are input-only audio  devices that generally use a generic Windows driver and, with a few  exceptions, rarely offer native ASIO support.  USB microphones generally  require a higher buffer size as they are primarily designed for  recording in cases where monitoring is unimportant.  When attempting to  record via a USB microphone and monitor via a separate audio device,  you're more likely to run into issues where the two devices are not  synchronized or drift apart after some time.  (The ugly secret of many  device manufacturers is that they rarely operate at EXACTLY the sample  rate specified.  The difference between 44,100 and 44,118 Hz is  negligible when listening to audio, but when trying to precisely  synchronize to a track recorded AT 44,100, the difference adds up over  time and what sounded in sync for the first minute will be wildly  off-beat several minutes later.)  You are almost always going to have  better sync and performance with a standard microphone connected to the  same device you're using for playback, and for serious recording, this  is the best practice.  If USB microphones are your only option, then I  would recommend making certain you purchase a high-quality one and have  an equally high-quality playback device.  Attempt to match the buffer  sizes and sample rates as closely as possible, and consider using a  higher buffer size and correcting the latency post-recording.  (One  method of doing this is to have a click or clap at the beginning of your  session and make sure this is recorded by your USB microphone.  After  you finish your recording, you can visually line up the click in the  recorded track with the click in the original track by moving your clip  backwards in the timeline.  This is not the most efficient method, but  this alignment is the reason you see the clapboards in behind-the-scenes  filmmaking footage.)
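    To put rough numbers on the clock-drift point (the 44,100 vs 44,118 Hz figures are the ones from the paragraph above; the helper name is hypothetical):

    ```javascript
    // Apparent time offset between two devices whose clocks differ slightly:
    // the extra samples the faster device produces over the elapsed time,
    // expressed as seconds at the nominal rate.
    function driftSeconds(nominalRate, actualRate, elapsedSeconds) {
        return ((actualRate - nominalRate) * elapsedSeconds) / nominalRate;
    }

    var afterOneMinute = driftSeconds(44100, 44118, 60);   // ~0.024 s: barely noticeable
    var afterTenMinutes = driftSeconds(44100, 44118, 600); // ~0.24 s: clearly off-beat
    ```

    An 18 Hz error is a 0.04% deviation, yet it accumulates to a quarter of a second over ten minutes, which is why the click-and-realign trick described above is needed.
    
    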
    Other Hardware: Other hardware in your computer plays a role in the ability to feed or store audio data quickly. CPUs are fast and, with multiple cores, capable of spreading the load, so the bottleneck for good performance - especially at high sample rates - tends to be your hard drive or storage media. It is highly recommended that you configure your temporary files location, and your session/recording location, on a physical drive that is NOT the one your operating system is installed on. Audition and other DAWs have absolutely no control over what Windows or OS X may decide to do at any given time, and if your antivirus software or system file indexer decides it's time to start churning away at your hard drive at the same time that you're recording your magnum opus, you raise the likelihood of losing some of that performance. (In fact, it's a good idea to disable all non-essential applications and internet connections while recording to reduce the likelihood of external interference.) If you're going to be recording multiple tracks at once, it's a good idea to purchase the fastest hard drive your budget allows. Most cheap drives spin at around 5400 rpm, which is fine for general use cases but does not allow for the fast read, write, and seek operations the drive needs to perform when recording and playing back multiple files simultaneously. 7200 rpm drives perform much better, and even faster options are available. While fragmentation is less of a problem on OS X systems, on Windows you'll want to defragment your drive frequently - this process realigns all the blocks of your files so they're grouped together. As you write and delete files, pieces of each tend to get placed in the first location that has room, which ends up creating gaps or splitting files up all over the disk. The act of reading or writing to these spread-out areas causes the operation to take significantly longer than it needs to, and can contribute to glitches in playback or loss of data when recording.

    There is one point in the above that needed a little clarification, relating to USB mics:
    _durin_ wrote:
     If  USB microphones are your only option, then I would recommend making  certain you purchase a high-quality one and have an equally high-quality  playback device.
    If you are going to spend that much, then you'd be better off putting a little more money into an  external device with a proper mic pre, and a little less money by not  bothering with a USB mic at all, and just getting a 'normal' condensor  mic. It's true to say that over the years, the USB mic class of  recording device has caused more trouble than any other, regardless.
    You should also be aware that if you find a USB mic offering ASIO support, then unless it's got a headphone socket on it as well, you aren't going to be able to monitor what you record if you use it in its native ASIO mode. This is because your computer can only cope with one ASIO device in the system - that's all the spec allows. What you can do with most ASIO hardware, though, is share multiple streams (if the device has multiple inputs and outputs) between different software.
    Seriously, USB mics are more trouble than they're worth.

  • Why do images within a PDF look jagged when viewed in Acrobat Pro 10.1.6?

    Using a MacBook Pro running Mac OS version 10.7.5 with NVIDIA GeForce GT 650M 1024 MB graphics card. High-resolution source images look fine in other software, and embedded images in a PDF look fine when viewed on Google Drive, but curves and diagonal lines look jagged when viewed in Acrobat Pro 10.1.6. Checking smoothing options in Preferences only adjusts the position of the jagged edges; it neither enhances nor ameliorates them.
    I created a sample PDF to illustrate the problem I'm having. Created in illustrator, exported to PNG and converted to PDF. Here is a screenshot of the source PNG side-by-side with the PDF I created from it:
    Please note that when I preview the PDF in Finder, and when I view the PDF in Chrome using Google Drive, the image looks fine. Here's a link to the sample PDF:
    http://www.sendspace.com/file/77f5m6
    Any assistance is appreciated. Thanks!

    Unfortunately, as I mentioned in my original post, I've already tried toggling smoothing options. They don't make the jagged edges go away; they just change them slightly. In some cases, as with the sample file provided, it does help smooth the art out, but it never looks as good as it did going in. In a recent project, toggling smoothing options didn't affect the appearance of the image at all. Zooming does not appear to relieve the artifacting.
    A little clarification: this PDF was created from a flat PNG, but I have had this issue with PNGs and TIFFs as well, generated by both Photoshop and Illustrator. To reiterate, the PDFs display fine in other software—Finder preview looks great, and the same file viewed in my browser via Google Drive looks as intended as well. A colleague was able to replicate this issue in Acrobat on his Windows machine. This appears to be a rendering issue specific to Acrobat with regards to the files I use it to create.

  • IPhoto 2: suddenly won't open library...

    ...and nor will it make a new one.
    G5, iPhoto 2, Tiger. iPhoto worked just fine yesterday. Today, Software Update reports two updates to d/l and install: Quicktime 7.1.2 and OS X Update 10.4.7. (nothing to do with iPhoto!) Downloaded and installed.
    Nothing else has been changed.
    After the obligatory restart, iPhoto was launched to work on some pics. The app loads, then reports that 'you have made a change in the iPhoto library using a newer version. Please use a newer version to view the iPhoto library.' and a Quit button.
    i searched through this forum, and there were some ideas:
    • rename the current Library, open iPhoto and create a new Library, quit and manually move the old Library into the new Library, then restart iPhoto
    • trash the plist from the user directory
    • rebuild Library
    Tried each of them, then tried all of them. Nothing changed. When the Library was moved from its directory to the desktop and renamed, a new Library dialog did appear, but when the user was prompted to save the new Library, the same error came up.
    New Library folder trashed; plist trashed; iPhoto started again: same as above.
    New Library folder trashed; plist trashed; Opt-Shft held as iPhoto opens: well, of course, it says no album is selected, then gives the original error and quits.
    Opt-Shft held with old Library in the proper location: asks if a rebuild is desired, then gives the original error and quits.
    At a total loss. We don't have iLife, because we don't want the rest of it. We just use iPhoto. Can't re-install iPhoto because it was part of the original restore discs -- well, okay, i could extract iPhoto from the package and reinstall just it, but i'd rather not because it's a bit of work.
    What's a stumper is that nothing was done to iPhoto itself with this download. At this point in the troubleshooting, i usually tell the user that a clean reinstall from the original discs is needed.
    Has anyone else seen this problem with iPhoto 2 after a Tiger update?
    iBook   Mac OS X (10.4.6)   G5, 10.4.7, iPhoto 2

    A little clarification please...
    As far as I can tell, the latest available Mac version of QuickTime is 7.1.1 (although that shouldn't have anything to do with the problem you've described).
    It's a little puzzling that a newly created library would induce the "you have made a change..." dialog. While an existing library might have become modified by applying an inappropriate updater, there is no possible way for iPhoto to generate a new, incompatible library. To where did you try to save this new library? Did you just accept the default location (your Pictures folder) that was offered?
    Restart your Mac and then run the Permissions Repair from Disk Utility. Try iPhoto again.
    If that doesn't improve things, try creating a new library this way...
    • Quit iPhoto if it's running
    • Move your iPhoto Library (all of them, if you've rebuilt and kept the original) to the desktop
    • Move the com.apple.iPhoto.plist file from your Preferences folder to the trash
    • Launch iPhoto and dismiss the "Welcome" dialog.
    Try importing some pictures (but not from any of your previous iPhoto Library folders)
    BTW, it's unlikely that your OS 10.4.7 update had anything to do with what's going on (other than a possible permissions issue) -- I just installed such an update, and iPhoto 2 works fine.

  • Problem with new DB app, report+form, report works great, form says ORA-01403: no data found

    I have a new table; the PK is a varchar2(5) column. When I allow the default query in the report to do its work, I get all the expected data. When I click on the edit icon (pencil), I get an error screen indicating ORA-01403: no data found. I'm hosed! This was generated by the app! No changes were made to anything in the app, except to turn off tabs at create time. I even left the default name.
    My ARF is hitting the right table with the PK column, but finds nothing.  I have the "success" message showing me the PK value.  What could be going on here, and how can it be addressed?  Today is the 1st time I have seen this matter.
    I'm running 4.22 as the workspace admin, I have other apps that work fine (to expectation), my browser is FF22, though I plan a downgrade to 18.  Our DB is 11.1.0.7.

    Jorge, thanks for your attention to my problem; I appreciate any insights, although there is a little clarification I can offer. Also, if you can, please remind me of the tags to use in my text that would properly set off the code snippets or prior message content.
    [you wrote]
    You said you have a "success" message showing you the PK value. Can you elaborate on this?
    The form page, under the ARF, allows for the display of a "success" message and a "failure" message. I have seen my "success" message appear, but it didn't show my key field as a brought-back value (which I told it to include), and I now think this is no longer relevant. I found a link on the report attributes page between #ROWID# and a P2_ROWID that was incorrect (probably from an earlier stage of dev in the app); I changed this to my key field, and that altered the outcome of the ARF action. This leads to ....
    [you wrote]
    Can you see the correct PK value in the URL? Does the item parameter match what you expect (correct page item and value)? Perhaps share that full URL here?
    I have expected values in my URL.  The URL does show my key value (tail end of URL underlined here):
    ../apex/f?p=120:2:7519563874482::NO::P2_VCODE:RB15
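For what it's worth, the f?p URL is positional and colon-delimited: App:Page:Session:Request:Debug:ClearCache:ItemNames:ItemValues. A quick sketch (plain Python, using the URL quoted above) that splits it out, just to confirm which item is actually being set by the link:

```python
# Split an APEX f?p URL into its positional segments.
# Segment names follow the documented f?p syntax:
# App:Page:Session:Request:Debug:ClearCache:ItemNames:ItemValues
url = "../apex/f?p=120:2:7519563874482::NO::P2_VCODE:RB15"
segments = url.split("f?p=")[1].split(":")
names = ["app", "page", "session", "request",
         "debug", "clear_cache", "item_names", "item_values"]
parsed = dict(zip(names, segments))

print(parsed["item_names"], "=", parsed["item_values"])
# → P2_VCODE = RB15
```

So the link side of the URL looks fine (P2_VCODE is reaching page 2 with the key value), which points the suspicion at whatever item the fetch process reads, not at the URL itself.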
    [you wrote]
    Debug the page and see which process, item or step is actually failing. You could be running some other process on the form page and that could be what actually fails.  Treat it as if the ARF works correctly and see what else could be happening.
    I can add the detail that my 1st message was based on testing with a table where I set the PK as data type VARCHAR2, but in more testing on the actual app (whose URL piece is above) I am using a PK which is CHAR.
    The result of the debug effort is that APEX has built its own query for pulling back the row in the ARF, and it is joining on my PK field to an APEX item P_ROWID which I don't think I created. Nor does it appear to offer me any avenue for correcting it. Debug snippet: where "VCODE" = :p_rowid; end;

  • My iPhone died and I got a new one. How can I get my music in iTunes back? And no, I didn't sync it with my mac or anywhere else....

    I use iCloud....

    Hello, tiburon1979.  
    Thank you for visiting Apple Support Communities. 
    I would need a little clarification on the issue that you are experiencing to provide a better answer.  However, if your device is not recognized by iTunes, try going through the steps in the article below.  
    iPhone, iPad, or iPod not recognized in iTunes for Windows
    If the device is recognized and you are unable to select the syncing preference, see the article below as the steps have changed.  
    Sync your iPhone, iPad, and iPod with iTunes using USB
    Cheers, 
    Jason H.  

  • Lion Server without a Static IP - Worth it?

    I'm running a small video production company, and I'm considering setting up my iMac with Lion Server for use in organizing productions.  I love the idea of having calendars that multiple people can update and expand upon, a wiki or custom website for sharing progress and updates, shared contacts for keeping track of cast and crew, distributing files like scripts and footage, and eventually setting it up to host my website and company email.
    My problem is that I don't have a static IP, and from what I've found, I can't afford one right now.  That being said, I'm fine holding off on the webhosting and email for now, and I imagine I'll lose the ability to do push notifications as well, but I'm still interested in the system.  Having the calendars and contacts update whenever the employee logs into the local network at the office would work for us.  But I wanted to check: is that how it would go down?  It would sync the info when each device logged onto our network and then they could go about their merry way, or is it more complicated than that?
    I'm fairly technically savvy (I work part-time as a web designer, and I actually work at an Apple store as well), so I imagine I can handle the setup and such. I'm just curious as to how much of my desired functionality will even work with the "update whenever you enter the network" pattern. Is that how it would go down, or is it more dependent on a static IP, even for local-network use?
    -Nerrolken

    Linc is right, but I'd add a little clarification.
    Lion Server does want a static IP address. It's perfectly happy if that address is on your LAN. Make sure it's on the same subnet as the LAN (Ethernet) side of your Internet router/gateway/access point. Configure the router to reserve a static IP for your server--so that a DHCP query will give your iMac the same IP every time--or narrow the range of IPs the router doles out to exclude the static address you assign your iMac.
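To make the "same subnet, outside the DHCP pool" rule concrete, here is a small sketch using Python's stdlib ipaddress module. The addresses are hypothetical examples, not values from the post; substitute your own router's LAN settings:

```python
import ipaddress

# Hypothetical LAN: router at 192.168.1.1, DHCP pool .100-.199
lan = ipaddress.ip_network("192.168.1.0/24")
dhcp_first, dhcp_last = 100, 199
server_ip = ipaddress.ip_address("192.168.1.10")  # proposed static IP for the iMac

# The static address must sit on the LAN subnet...
on_lan = server_ip in lan
# ...and outside the range the router hands out, so DHCP never
# assigns the same address to another client.
host_part = int(server_ip) - int(lan.network_address)
outside_pool = not (dhcp_first <= host_part <= dhcp_last)

print("usable as static server IP:", on_lan and outside_pool)
# → usable as static server IP: True
```

Either narrowing the pool like this or configuring a DHCP reservation for the iMac's MAC address achieves the same end: the server sees the same address every time.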
    If the day comes when you do want to publish some services to the Internet, configure port forwarding for those services in your Internet router and, as Wittless said, sign up for DynDNS or a similar service so your users can find you. Lion Server handles all of this automatically if you use AirPort Extreme or Time Capsule, but it's almost as easy to manage with non-Apple network gear.
    Best of luck.

  • Adding a new field on ESS screen.

    Hello Experts,
    Just need a suggestion on how to add a check box to the ESS view.
    Basically, after reimporting the model, I want to know where I have to edit the model bindings.
    On the RFC side, we have added a custom field in the structure HRBEN00_FSACONTRIB_TRANS of RFC HR_BEN_ESS_RFC_OFFER_DETAILS.
    The view I am trying to add the field to is DetailFSAContributionView.
    I just want to get a little clarification on the communication between the view, interface controller, component controller and the model.
    In a Web Dynpro application the binding is between view -> controller and controller -> model. It looks like in the FPM application it's different.
    So, can anyone let me know how to do the binding, and any additional steps that need to be added to the code in order to initiate the values?
    Your help is highly appreciated.
    Thanks,
    James

    HI,
    To add a check box, do it in the layout if you want to do it at design time,
    or write code in wdDoModifyView to create the check box if you want to do it dynamically.
    Coming to the binding:
    first check how the binding is done in the data modeller;
    if it is view -> component controller -> interface controller
    and model -> component controller,
    reimport the model (as you have modified the structure).
    First, apply the template (service controller) to the component controller (binding takes place between the component controller and the model).
    Then, bind the view to the component controller, and then the component controller to the interface controller.
    Regards,
    Satya.

  • Difference among Model node creation, model attribute creation and the field creation in database through AET?

    Hello Friends,
    To display the field on the View, we can create the field via 3 ways:
    1) By creating the model attribute
    2) By creating the model node using the GENIL object and using its attributes in view
    3) Creating the field in Database structure using AET
    But I am not sure exactly which business scenarios call for each of the above 3 ways of creating the field.
    Could you help me clarify this?

    Hi Dev,
    1). By creating the model attribute: we use this option when the field is available in the standard SAP system. It might be under another child context node, and we will use BOL relations to access the attribute.
    2). By creating the model node using the GENIL object and using its attributes in the view: we use this option when there is no standard provision to use. That means, if you want to create a custom assignment block with custom attributes, you can go for this option. Moreover, we can use AET to create a custom GENIL object; there is a Create Table option in AET.
    3). Creating the field in the database structure using AET: we use this option when there is no attribute in the database relevant to your requirement; then we go for an AET enhancement.
    Hope this gives you a little clarification.
    Best Regards,
    Dharmakasi.

  • I bought my iPhone 5 a week ago; now suddenly my sound is gone when I play games. I checked everything, and the weird thing is that when I put my headphone in I CAN hear the sound when in games; also it just plays music without my headphone. What could it

    Hello, so as you can see in my question I have trouble with my sound.
    I can't hear anything of my sound in game, and I did check everything.
    The weird thing is that when I put my headphone in I do hear the sound.
    Also my music plays without my headphone.
    What can it be? Please help!

    Ok, a little clarification then, you can hear music fine with the speaker and headphone?  You can hear a game's music fine with the headphone but not with the speaker?  Do you get the notification sounds ok with the speaker?
    If the speaker sound is acceptable with everything except the game then there is a problem with that game and its settings.
    If that is the case, you might want to try deleting the game and then downloading and reinstalling the game to see if that helps the problem.

  • I restored my phone and now on iTunes it won't let me put anything back on the phone and where it says iPhone there is nothing there

    I just updated my iPhone to iOS 5 and everything seemed to be OK; I still have
    my contacts, but when I plug into iTunes, the page which normally had all the info for the phone is blocked and nothing comes up, and I can't put my music back on it or my apps. Please help me ASAP as I don't even know how to sync it anymore.
    Thank you
    Rebecca

    Hi, CrystalLuna.  
    Thank you for visiting Apple Support Communities.  
    I would need a little clarification on this issue to provide a better answer. However, if your device will not turn on or is unresponsive, take a look at the troubleshooting steps in the article below.
    If your iPhone, iPad, or iPod touch doesn't respond or doesn't turn on
    If your device will power on but the display is blank, Screen Curtain may be enabled.  Take a look at the steps in the article below to disable this feature.  
    Turn the screen curtain on or off. Triple-tap with three fingers. When the screen curtain is on, the screen contents are active even though the display is turned off.
    Use iPhone with VoiceOver
    -Jason H.  

Maybe you are looking for

  • Jython scripts fails from cmd line Error: no domain or domain template...

    --------------script--------------- connect('weblogic','welcome1','t3://obi5.mnapps.state.mn.us:7101',adminServerName='AdminServer'); print 'Connecting to Domain ...' try:           domainCustom() except:           print 'Already in domainCustom' cd(

  • Missing Grid!!

    Hi guys, my grid is missing! i'm currently working on a game, and tried adding menu control buttons; the menu buttons work well, but my grid is now missing!! This is really both funny and frustrating.. can anyone tell me what's missing with my code?

  • UWL substitution with multiple systems

    Hi all, I need some help in understanding how an UWL substitution works with multiple systems. We are using ESS, MSS and SRM workflows and use UWL for all workflow scenarios. All users are using ESS, all managers are using ESS, MSS and SRM. In the UW

  • Added diskpace doesn't show up in Disk Utility

    My Problem: I can't get my added disk space to show up on my RAID volume. Here's what I have done so far: I just added 3 drives to my upper controller. I now have 7 drives, each are 250 GB for a total of 1.37 TB of disk space. The 7 drives are config

  • Multiplayer game: how to synchronize client and server

    Hi there, I'm wondering how to synchronize the client's and server's frame numbers? I have a multiplayer spaceship(2D) game running at constant 30 FPS, since I use "frame number" to determine if a packet is too late or not, so I would like to have server