Apple TV 3 buffer size?

I currently have an HTPC that I want to replace with the ATV3. I rip all my Blu-ray and DVD discs and encode them in x264 using HandBrake. This keeps my toddler from destroying my media (especially since many of them are Disney movies) and keeps all our movies easily accessible.
Since the ATV3 uses an A5 CPU, it is limited to H.264 Level 4.0, which means my files' maximum bitrate can only be 25 Mbit/s. I use variable bitrates, which I currently have capped at 40 Mbit/s (so as not to exceed the source). I plan to change my future encodes to limit the max bitrate to 25 Mbit/s, but when you specify a max bitrate you need to specify a buffer size as well.
So the question becomes: what is the hardware buffer size of the new ATV3? I would assume it is more than 25 Mbit, but I want to be sure before I start doing more encodes, so I do not run into playback problems.
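For reference, the two x264 options involved are vbv-maxrate (in kbit/s) and vbv-bufsize (in kbit); for High Profile at Level 4.0 the spec caps them at 25,000 and 31,250 respectively. So something like the following should stay within spec no matter what the hardware buffer turns out to be (a sketch, assuming a HandBrake build that passes advanced x264 options through via -x/--encopts; file names are placeholders):

HandBrakeCLI -i movie.mkv -o movie.m4v -e x264 -q 20 -x level=4.0:vbv-maxrate=25000:vbv-bufsize=25000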

Tree Dude wrote:
I am not sure what I said that was against the T&Cs. Backing up discs I purchased is completely legal and a very common use for these types of devices.
Apple Support Communities Terms of Use
Specifically 2 below:
Keep within the Law
No material may be submitted that is intended to promote or commit an illegal act.
Do not submit software or descriptions of processes that break or otherwise 'work around' digital rights management software or hardware. This includes conversations about 'ripping' DVDs or working around FairPlay software used on the iTunes Store.
Do not post defamatory material.
Your usage, to any sane person, constitutes 'fair use'. Specific laws regarding this kind of thing vary from country to country, but Apple tend to frown on such discussions - their rules, not ours.
If you bend the rules, your posts may get deleted. Trust me, I've been there and had posts deleted in the past.
AC

Similar Messages

  • How to set buffer size mac?

    Video streaming is intermittent (BBC iPlayer).
    Cannot get info about the router, but I have the latest Apple Extreme modem and a MacBook Air.

    A DAQboard is an IOtech device. I use a DAQbook in one of my programs.
    You set the buffer size by calling Acquisition Scan Configuration.vi, assuming you've downloaded the IOtech enhanced LabVIEW drivers. The buffer size input to this VI is in number of scans. There's no real rule for the size you should use; just set it to a size that's big enough to hold as many scans as you will need buffered before you read them. For example, if the board is set up to sample at 100 Hz and you read the buffer once a second, make sure the buffer is at least 100 scans.
    As for your other question, like Dennis said, you don't have to do anything to reserve the memory. The operating system will take care of it for you.

  • Data retrieval buffers - buffer size and sort buffer size

    Is there any difference in tuning the data retrieval buffers between BSO and ASO?
    From the Oracle documentation, the buffer size setting is per database per Essbase user, i.e. more physical memory will be used if there is a lot of concurrent data access from users.
    However, even for 100 concurrent users, the default buffer size of 10KB (BSO) or 20KB (ASO) seems very small compared to other cache settings (total buffer cache is 100 * 20KB = 2MB). Should we increase the value to 1000KB to improve data retrieval performance for users? Is the improvement the same for an online application (e.g. Hyperion Planning) and a reporting application (e.g. Financial Reporting)?
    Assume 3 Essbase plan types with 100 concurrent users:
    PLAN1 - retrieval buffer: 1000KB * 100 = 100MB; sort buffer: 1000KB * 100 = 100MB
    PLAN2 - retrieval buffer: 1000KB * 100 = 100MB; sort buffer: 1000KB * 100 = 100MB
    PLAN3 - retrieval buffer: 1000KB * 100 = 100MB; sort buffer: 1000KB * 100 = 100MB
    Total physical memory required is 600MB.
    Thanks in advance!

  • "Streaming Buffer Size" pop-up menu does not exist under Advanced when I use iTunes - Preferences - Advanced. How can I increase buffer size?

    How can I change the buffer size in iTunes 10? The "Streaming Buffer Size" pop-up menu does not exist under Preferences > Advanced.

    Hi all, iTunes streaming victims.
    I struggled for a long time with the streaming buffer problem in iTunes, and indeed I could solve it by increasing the buffer size, a feature that has been removed in iTunes 10 (without Apple updating the instructions!).
    Here is my solution:
    http://www.rogueamoeba.com/airfoil/
    Perfect for all wireless audio streaming, not only for iTunes but for any source on the Mac.
    It solved the problem completely for my 3 MacBooks, 2 AirPorts, iPhones, etc.
    It is not free, but it is very good value for money.
    cheers, Wil

  • Buffer size and recording delay

    Hi
    I use a Focusrite Saffire LE audio interface, and in my core audio window the buffer size is currently set at 256 and there is a recording delay of 28ms. I don't think I've ever altered these settings since I started to use Logic. I don't know if this is related, but I do notice a slight delay when I monitor what I'm recording -- a slight delay, for example, between my voice (as I'm recording it) through the headphones and my "direct" voice.
    Anyway, are these settings OK, or should they be altered for optimum performance?

    A 256-sample buffer size will always give you a noticeable amount of latency. If you use software monitoring, you should try setting your buffer to 64 samples. With the recording delay slider in Preferences > Audio you can compensate for the latency (of course not in real time) so that the audio is placed exactly where it should have been recorded. In your case, set it to a negative value. A loopback test (check the link below) will clarify the exact amount of latency occurring on your system.
    http://discussions.apple.com/thread.jspa?threadID=1883662&tstart=0
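    To put numbers on that (assuming a 44.1 kHz project; the loopback test measures the real round trip, which adds converter and driver overhead on top):
    latency per buffer = buffer size / sample rate
    256 / 44100 ≈ 5.8 ms
    64 / 44100 ≈ 1.5 ms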

  • Buffer Size - How low can you go

    I was wondering if you guys can exchange some information about your buffer size settings in Logic and how much mileage you can get out of them.
    I upgraded to the new 8-core 2.8GHz MacPro a few weeks ago and thought I would live in 32-sample buffer size dreamland until the software developers come out with the next CPU-hungry beast. But it looks like a lot of the current software instruments already bring the new Intel Macs to their knees.
    This is my setup:
    MacPro 2.8GHz 8-core, 12GB RAM, OSX 10.5.3, Logic 8.02, Fireface 800
    This is my problem:
    If I'm looking at one channel of, for example, Sculpture, then all 8 cores don't do me any good, because Logic can use only 1 core per channel strip. The additional cores come into play when I'm playing multiple tracks and Logic spreads the CPU workload across those cores. So if I set the buffer size to the minimum of 32 samples, it comes down to one 2.8GHz core and whether it is powerful enough to process that one software instrument without clicks and interruptions.
    Sculpture:
    Some of the patches are already so demanding that I reach the CPU limit when I play chords of four to eight notes with the 32-sample setting. If I add some "heavy" FX plug-ins like amp modeling, then I definitely reach the limit.
    Trilogy and Atmosphere Wrapper:
    Most of the time I have to increase the buffer size to 128 just to play a few notes. These "workaround wrapper" plugins are a plain joke from Spectrasonics and almost useless. There is plenty of discussion in various forums about how they pissed off a lot of their customers with the way they handled the Intel transition for these two great plugins.
    Audio Interface Considerations:
    The different vendors of audio interfaces brag about the low latency of their devices. Apogee's Symphony system especially was supposed to deliver extremely low latency. When they demoed their hardware at various Apple events, they played gazillions of tracks and plug-ins and everything ran at a 32-sample buffer size. I never saw, however, a demo where they loaded gazillions of Sculpture instruments and showed that playing with a 32-sample buffer.
    Here are my three basic questions:
    1) Is anybody already experiencing the same limitations with the new MacPros when it comes to intense software instruments?
    2) Does anybody use the 3.2GHz machines with better results, or is the difference just marginal?
    3) Is anybody running the Symphony system who can throw any software instrument at it with a 32-sample buffer?
    BTW, the OSX 10.5.3 update fixed the constant popping up of the "System Overload" window, but regarding the CPU load with software instruments, I don't see much of an improvement.
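    To put rough numbers on it (my own back-of-the-envelope, assuming a 44.1 kHz sample rate):
    rendering window per buffer = buffer size / sample rate
    32 / 44100 ≈ 0.73 ms
    128 / 44100 ≈ 2.9 ms
    Since Logic pins one channel strip to one core, a demanding Sculpture patch plus FX has to fit inside that window on a single 2.8GHz core, no matter how many cores sit idle.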

    My system is happy at 32 samples with the FF800... This is on the January 2008 8-core with 6GB RAM, running 10.5.3, Logic 8.0.2.
    Plugs include NI Komplete 5 bundle, Waves Gold, BFD2 (thought this would stump it, but it's fine with 2.0.4), Access Virus TI Snow.
    I/O safety buffer off and the process buffer set to Small.

  • [svn] 1991: Also reducing HexEncoder buffer size from 64K to 32K to help with the smaller stack size of the AVM on Mac as part of fix for SDK-15232 .

    Revision: 1991
    Author: [email protected]
    Date: 2008-06-06 19:05:02 -0700 (Fri, 06 Jun 2008)
    Log Message:
    Also reducing HexEncoder buffer size from 64K to 32K to help with the smaller stack size of the AVM on Mac as part of fix for SDK-15232.
    QE: Yes, please test mx.utils.HexEncoder with ByteArrays larger than 64K on PC and Mac too.
    Doc: No
    Checkintests: Pass
    Bugs:
    SDK-15232
    Ticket Links:
    http://bugs.adobe.com/jira/browse/SDK-15232
    Modified Paths:
    flex/sdk/branches/3.0.x/frameworks/projects/rpc/src/mx/utils/HexEncoder.as

    I'm having this same issue. I also have this line in my log, which is curious:
    12/14/14 7:13:07.822 PM netbiosd[16766]: Attempt to use XPC with a MachService that has HideUntilCheckIn set. This will result in unpredictable behavior: com.apple.smbd
    Is this related to the problem? What does it mean?
    My 2010 27" iMac running Yosemite won't wake up from sleep.

  • Increase stream buffer size

    Hello everyone,
    my internet radio streams often stop or have to buffer, although I have bandwidth > 14,000 kbit/s. I read on forums that I should increase the stream buffer through the iTunes preferences, but I learned that the newer versions of iTunes don't have this preference. Is there an alternative way? Maybe via Terminal?
    Thank you in advance for your help!
    Greetings
    Danny

    Hi all, iTunes streaming victims.
    I struggled for a long time with the streaming buffer problem in iTunes, and indeed I could solve it by increasing the buffer size, a feature that has been removed in iTunes 10 (without Apple updating the instructions!).
    Here is my solution:
    http://www.rogueamoeba.com/airfoil/
    Perfect for all wireless audio streaming, not only for iTunes but for any source on the Mac.
    It solved the problem completely for my 3 MacBooks, 2 AirPorts, iPhones, etc.
    It is not free, but it is very good value for money.
    cheers, Wil

  • Linux Serial NI-VISA - Can the buffer size be changed from 4096?

    I am communicating with a serial device on Linux, using LV 7.0 and NI-VISA. About a year and a half ago I asked customer support if it was possible to change the buffer size for serial communication. At that time I was using NI-VISA 3.0. In my program, the VISA function for setting the buffer size would send back an error of 1073676424, and the buffer would always remain at 4096, no matter what value was input into the buffer size control. The answer to this problem was that the error code was just a warning, letting you know that you could not change the buffer size on a Linux machine, and 4096 bytes was the pre-set buffer size (unchangeable). According to the person who was helping me: "The reason that it doesn't work on those platforms (Linux, Solaris, Mac OS X) is that it is simply unavailable in the POSIX serial API that VISA uses on these operating systems."
    Now I have upgraded to NI-VISA 3.4 and I am asking the same question. I notice that an error code is no longer sent when I input different values for the buffer size. However, in my program the bytes returned from the device max out at 4096, no matter what value I input into the buffer size control. So, has VISA changed, and it is now possible to change the buffer size, but I am setting it up wrong? Or have the error codes changed, but it is still not possible to change the buffer size on a Linux machine with NI-VISA?
    Thanks,
    Sam

    The buffer size still can't be set, but it seems that we are no longer returning the warning. We'll see if we can get the warning back for the next version of VISA.
    Thanks,
    Josh
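    In the meantime, the usual workaround is to request at most 4096 bytes per read and accumulate until you have everything you expect. A sketch of the pattern in Java (a plain InputStream stands in for the VISA session here; the class and method names are just for illustration):
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    public class ChunkedReader {
      private static final int OS_BUFFER = 4096;  // the fixed POSIX-side buffer
      // Accumulates 'expected' bytes from a source whose driver caps any
      // single read at 4096 bytes, as VISA serial reads on Linux do.
      static byte[] readAll (InputStream port, int expected) throws IOException {
        ByteArrayOutputStream result = new ByteArrayOutputStream (expected);
        byte[] chunk = new byte[OS_BUFFER];
        while (result.size () < expected) {
          int want = Math.min (OS_BUFFER, expected - result.size ());
          int got = port.read (chunk, 0, want);
          if (got == -1) break;  // source closed early
          result.write (chunk, 0, got);
        }
        return result.toByteArray ();
      }
    }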

  • What's the optimum buffer size?

    Hi everyone,
    I'm having trouble with my unzipping method. The thing is, when I unzip a smaller file, say 200 KB, it unzips fine. But when it comes to large files, say 10,000 KB, it doesn't unzip at all!
    I'm guessing it has something to do with the buffer size... or does it? Could someone please explain what is wrong?
    Here's my code:
    import java.io.*;
    import java.util.zip.*;
    /**
     * Utility class with methods to zip/unzip and gzip/gunzip files.
     */
    public class ZipGzipper {
      public static final int BUF_SIZE = 8192;
      public static final int STATUS_OK          = 0;
      public static final int STATUS_OUT_FAIL    = 1; // No output stream.
      public static final int STATUS_ZIP_FAIL    = 2; // No zipped file
      public static final int STATUS_GZIP_FAIL   = 3; // No gzipped file
      public static final int STATUS_IN_FAIL     = 4; // No input stream.
      public static final int STATUS_UNZIP_FAIL  = 5; // No decompressed zip file
      public static final int STATUS_GUNZIP_FAIL = 6; // No decompressed gzip file
      private static String fMessages [] = {
        "Operation succeeded",
        "Failed to create output stream",
        "Failed to create zipped file",
        "Failed to create gzipped file",
        "Failed to open input stream",
        "Failed to decompress zip file",
        "Failed to decompress gzip file"
      };
      /**
       *  Unzip the files from a zip archive into the given output directory.
       *  It is assumed the archive file ends in ".zip".
       */
      public static int unzipFile (File file_input, File dir_output) {
        // Create a buffered zip stream to the archive file input.
        ZipInputStream zip_in_stream;
        try {
          FileInputStream in = new FileInputStream (file_input);
          BufferedInputStream source = new BufferedInputStream (in);
          zip_in_stream = new ZipInputStream (source);
        }
        catch (IOException e) {
          return STATUS_IN_FAIL;
        }
        // Need a buffer for reading from the input file.
        byte[] input_buffer = new byte[BUF_SIZE];
        int len = 0;
        // Loop through the entries in the ZIP archive and read
        // each compressed file.
        do {
          try {
            // Need to read the ZipEntry for each file in the archive.
            ZipEntry zip_entry = zip_in_stream.getNextEntry ();
            if (zip_entry == null) break;
            // Use the ZipEntry name as that of the compressed file.
            File output_file = new File (dir_output, zip_entry.getName ());
            // Create a buffered output stream.
            FileOutputStream out = new FileOutputStream (output_file);
            BufferedOutputStream destination =
              new BufferedOutputStream (out, BUF_SIZE);
            // Reading from the zip input stream will decompress the data
            // which is then written to the output file.
            while ((len = zip_in_stream.read (input_buffer, 0, BUF_SIZE)) != -1)
              destination.write (input_buffer, 0, len);
            destination.flush (); // Ensure all the data is output.
            out.close ();
          }
          catch (IOException e) {
            return STATUS_UNZIP_FAIL;
          }
        } while (true); // Continue reading files from the archive.
        try {
          zip_in_stream.close ();
        }
        catch (IOException e) {}
        return STATUS_OK;
      } // unzipFile
    } // ZipGzipper
    Thanks!!!!

    Any more hints on how to fix it? I've been fiddling around with it for an hour... and throwing more exceptions. But I'm still no closer to debugging it! Thanks

    Did you add:
    e.printStackTrace();
    to your catch blocks?
    Didn't you in that case get an exception which says something similar to:
    java.io.FileNotFoundException: C:\TEMP\test\com\blah\icon.gif (The system cannot find the path specified)
         at java.io.FileOutputStream.open(Native Method)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
         at Test.unzipFile(Test.java:68)
         at Test.main(Test.java:10)
    Which says that the error is thrown here:
         // Create a buffered output stream.
         FileOutputStream out = new FileOutputStream(output_file);
    Kaj
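    That FileNotFoundException usually means the entry's parent directory (C:\TEMP\test\com\blah\ in the trace above) doesn't exist yet; larger archives tend to be the ones with nested directory entries, which is why small flat zips worked. A minimal sketch of the usual fix (the class and method names are hypothetical, just to show where mkdirs() goes before the stream is opened):
    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    import java.util.zip.ZipEntry;
    public class UnzipFix {
      // Opens the output stream for a zip entry, creating any missing
      // parent directories first. Returns null for directory entries,
      // which carry no data and should simply be skipped by the caller.
      static FileOutputStream openEntryOutput (File dir_output, ZipEntry zip_entry)
          throws FileNotFoundException {
        File output_file = new File (dir_output, zip_entry.getName ());
        if (zip_entry.isDirectory ()) {
          output_file.mkdirs ();
          return null;
        }
        File parent = output_file.getParentFile ();
        if (parent != null) parent.mkdirs ();  // e.g. creates C:\TEMP\test\com\blah
        return new FileOutputStream (output_file);
      }
    }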

  • Sychronize AO/AI buffered data graph and measure data more than buffer size

    I am trying to measure the response time (around 1 ms) of the pressure drop indicated by AI channel 0 when AO channel 0 gives a negative single pulse to the unit under test (a valve). DAQ board: Keithley KPCI-3108, LabVIEW version: 6.1, OS: Windows 2000 Professional.
    My problem is that I get differently timed graphs between the AI and AO channels every time I run my program, except the first time, when I get a real-time graph. I tried to decrease the buffer size below the max buffer size of the DAQ board (2048 samples), but I still get stale data on the AI channel's graph; it seems it was still reading old data from the buffer when AO writes the new buffer data, that is my guess. In my program, the AO and AI parts are separate; the AO write buffer is in a while loop while the AI read is not. Would that be a problem? Or is it something else?
    Also, I am trying to measure data much larger than the board's buffer size limit. Is it possible to make the measurement by modifying the program?
    I really appreciate any of your help. Thank you very much!
    Best,
    Jenna

    Jenna,
    You can modify the X-axis of a chart/graph in LabVIEW to display real-time. I have included a link below to an example program that illustrates how to do this.
    If you are doing a finite, buffered acquisition, make sure that you are always reading everything from the buffer for each run. In other words, if you set a buffer size of 5000, then make sure you are reading 5000 scans (set number of scans to read to 5000). This will ensure you are reading new data every time you run your program. You could always put the AI Read VI within a loop and read a smaller number from the buffer until the buffer is empty (monitor the scan backlog output of the AI Read VI to see how many scans are left in the buffer).
    You can set a buffer size larger than the FIFO buffer of the hardware. The buffer size you set in LabVIEW is actually a software buffer within your computer's memory. The data is acquired by the hardware, stored temporarily in the on-board FIFO, transferred to the software buffer, and then read in LabVIEW.
    Are you trying to create a TTL square wave with the analog output of the DAQ device? If so, the DAQ device has counters that can generate a highly accurate digital pulse as well. Just a suggestion. LabVIEW has a variety of shipping examples that are geared toward using counters (find examples>>DAQ>>counters). I hope this helps.
    Real-Time Chart Example
    http://venus.ni.com/stage/we/niepd_web_display.DISPLAY_EPD4?p_guid=B45EACE3E95556A4E034080020E74861&p_node=DZ52038&p_submitted=N&p_rank=&p_answer=&p_source=Internal
    Regards,
    Todd D.
    National Instruments
    Applications Engineer

  • Network Stream Error -314340 due to buffer size on the writer endpoint

    Hello everyone,
    I just wanted to share a somewhat odd experience we had with the network stream VIs. We found this problem in LV2014 but aren't aware whether it is new or not. I searched for a while on the network stream endpoint creation error -314340 and couldn't come up with any useful links to our problem. The good news is that we have fixed our problem, but I wanted to explain it a little more in case anyone else has a similar issue.
    The specific network stream error -314340 should seemingly occur if you are attempting to connect to a network stream endpoint that is already connected to another endpoint or in which the URL points to a different endpoint than the one trying to connect. 
    We ran into this issue on attempting to connect to a remote PXI chassis (PXIe-8135) running LabVIEW real-time from an HMI machine, both of which have three NICs and access different networks.  We have a class that wraps the network stream VIs and we have deployed this class across four machines (Windows and RT) to establish over 30 network streams between these machines.  The class can distinguish between messaging streams that handle clusters of control and status information and also data streams that contain a cluster with a timestamp and 24 I16s.  It was on the data network streams that we ran into the issue. 
    The symptoms of the problem were that if we attempted to use the HMI computer with a reader endpoint specifying the URL of the writer endpoint on the real-time PXI, the reader endpoint would return with an error of -314340, indicating the writer endpoint was pointing to a third location. Leaving the URL on the writer endpoint blank and running in real-time interactive mode or as a startup VI made no difference. However, the writer endpoint would return without error and eventually catch a remote endpoint destroyed. To make things more interesting, if you specified the URL on the writer endpoint instead of the reader endpoint, the connection would be made as expected.
    Ultimately, through experimenting with it, we found that the buffer size of the create writer endpoint for the data stream was causing the problem and that we had fat-fingered the constants for this buffer size. Also, pre-allocating or allocating the buffer on the fly made no difference. We imagine it may be because we are using a complex data type (a cluster with an array inside of it) and it can be difficult to allocate a buffer for this data type. We guess that the issue may be that when the reader endpoint establishes the connection to a writer with a large buffer size specified, the writer endpoint ultimately times out somewhere in the handshaking routine that is hidden below the surface.
    I just wanted to post this so others would have a reference if they run into a similar situation and again for reference we found this in LV2014 but are not aware if it is a problem in earlier versions.
    Thanks,
    Curtiss

    Hi Curtiss!
    Thank you for your post!  Would it be possible for you to add some steps that others can use to reproduce/resolve the issue?
    Regards,
    Kelly B.
    Applications Engineering
    National Instruments

  • How to read messages longer than network buffer size

    The logic of my application is:
    the client sends a request to the server and waits, in blocking mode, for its response.
    The server can respond with strings longer than 64KB (the size of their sending and receiving buffers), so under the hood it can also execute more than one socketChannel.write.
    Nothing in the message says where it finishes; nevertheless, the client needs to assemble it all into one big string.
    How can the client deal with this? I'd like to keep it as simple as possible (without using a selector).
    Any thoughts?
    Thanks in advance

    Your above post suggests that it can send more than one packet (even ignoring the 64K limit).
    In that case, the data of the message must contain sufficient information. If not, then the solution is not determinate.
    Ideally, what you should receive is a message and not just data. The message defines its contents, so you know how long it is and maybe even where it ends.
    Alternatively, the data might contain something. For example, if you are receiving well-formatted XML, then you can create a simple parser that just looks for the end tag. If it isn't well formatted, or at least you cannot rely on that, then it is much harder.
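    If you can change the wire format, the simplest fix is a length prefix: the server writes the payload size first, then the payload, and the client loops on read until it has exactly that many bytes, however many socketChannel.write calls the server used. A rough Java sketch against a blocking SocketChannel (class and method names are just for illustration):
    import java.io.EOFException;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;
    public class FramedReader {
      // Keep reading until the buffer is full; even a blocking channel may
      // deliver a large response in several chunks.
      private static void readFully (SocketChannel ch, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining ()) {
          if (ch.read (buf) == -1)
            throw new EOFException ("connection closed mid-message");
        }
      }
      // Reads one length-prefixed message: a 4-byte length header followed
      // by that many bytes of UTF-8 payload, reassembled into one string.
      public static String readMessage (SocketChannel ch) throws IOException {
        ByteBuffer header = ByteBuffer.allocate (4);
        readFully (ch, header);
        header.flip ();
        int length = header.getInt ();
        ByteBuffer payload = ByteBuffer.allocate (length);
        readFully (ch, payload);
        payload.flip ();
        return StandardCharsets.UTF_8.decode (payload).toString ();
      }
    }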

  • Where can I change the buffer size for LKM File to Oracle (EXTERNAL TABLE)?

    Hi all,
    I had a problem with the buffer size in the "LKM File to Oracle (EXTERNAL TABLE)" as follows:
    2801 : 72000 : java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-29400: data cartridge error
    KUP-04020: found record longer than buffer size supported, 524288, in D:\OraHome_1\oracledi\demo\file\PARTIAL_SHPT_FIXED_NHF.dat
    Do you know where can I change the buffer size?
    Remarks: the size of the file is ~2MB.
    Tao

    Hi,
    The behavior is explained in Bug 4304609.
    You will encounter ORA-29400 & KUP-04020 errors if the RECORDSIZE clause in the access parameters for the ORACLE_LOADER access driver is larger than 10MB and you are loading records larger than 10MB. This means there is another limitation on the read size of a record, termed the granule size. If the default granule size is less than RECORDSIZE, it limits the size of the read buffer to the granule size.
    Use the _px_xtgranule_size parameter to change the size of the granule to a number larger than the size specified for the read buffer. You can use the query below to determine the current size of the granule.
    SELECT KSPFTCTXPN PARAMETER_NUMBER,
    KSPPINM PARAMETER_NAME,
    KSPPITY PARAMETER_TYPE,
    KSPFTCTXVL PARAMETER_VALUE,
    KSPFTCTXDF IS_DEFAULT,
    KSPPIFLG MODIFICATION_FLAG,
    KSPFTCTXVF VALUE_FLAG
    FROM X$KSPPI X, X$KSPPCV2 Y
    WHERE (X.INDX+1) = KSPFTCTXPN AND
    KSPPINM LIKE '%_px_xtgranule_size%';
    There is no 'ideal' or recommended value for the _px_xtgranule_size parameter; it is safe to increase it to work around this particular problem. You can set this parameter using an ALTER SESSION/SYSTEM command.
    SQL> alter system set "_px_xtgranule_size"=10000;
    Thanks,
    Sutirtha

  • How do you determine input and output buffer size on a 3550-12G

    I have a Cisco 3550-12G switch and I want to check whether the input buffers and the output buffers for port gi0/12 are the same size. Is there a simple way to do this? I tried using the show buffers command but couldn't seem to find what I was looking for. Help!

    Hi,
    "The 3550 switch uses central buffering. This means that there are no fixed buffer sizes per port. However, there is a fixed number of packets on a Gigabit port that can be queued. This fixed number is 4096. By default, each queue in a Gigabit port can have up to 1024 packets, regardless of the packet size."
    http://www.cisco.com/warp/public/473/187.html#topic7
    HTH,
    Bobby
    *Please rate helpful posts.
