More of a forum request than AA

This forum is difficult to navigate for some of us who are used to forums designed like Audio Masters. Since Adobe is considered a leader in software, why not offer a better BB?

[email protected] wrote:
> This forum is difficult to navigate for some of us who are used to
> forums designed like Audio Masters. Since Adobe is considered a
> leader in software, why not offer a better BB?
Use a newsreader instead of the web interface and be happy.
Kind regards
Peter Larsen

Similar Messages

  • Connect via SQL*Plus taking more time in Oracle 11 than in 10

    Oracle Version
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    We are currently migrating from:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    The OS is AIX 5.3 for Oracle 10 and AIX 6.1 for Oracle 11.
    We are currently in the process of migrating some applications from Oracle 10 to 11. Our admins have set up a new development system for us with an Oracle 11 instance. Regarding performance, the new system behaves more or less the same as our old one: when executing SQLs we notice hardly any performance differences, and those we do find are slightly in favour of the new system.
    But now we discovered that the time it takes to establish a database connection via SQL*Plus is longer on the Oracle 11 system. Running the following code
    sqlplus -s user/pw@database <<END
    quit
    END
    Executed a thousand times, this takes ~60s under Oracle 10 but ~140s under Oracle 11. This may not seem like much, but we are running a test framework consisting of a bunch of shell scripts in which several thousand connections are opened via SQL*Plus to execute some SQLs, so even this small time difference results in a rather large difference in total runtime. The SQLs themselves require roughly the same time in both databases; as already mentioned above, they are in fact slightly faster in Oracle 11.
    To analyze the time difference I ran a trace for the connect with the following parameters in sqlnet.ora
    TRACE_LEVEL_CLIENT=4
    TRACE_UNIQUE_CLIENT=ON
    and found out that there is a time difference of about 70ms during the connect handshake:
    Oracle 10
    [22-MAR-2013 12:13:09:595] nscon: doing connect handshake...
    [22-MAR-2013 12:13:09:595] nscon: sending NSPTCN packet
    [22-MAR-2013 12:13:09:621] nscon: got NSPTRS packet
    [22-MAR-2013 12:13:09:621] nscon: sending NSPTCN packet
    Oracle 11
    (1) [22-MAR-2013 12:15:26:812] nscon: doing connect handshake...
    (1) [22-MAR-2013 12:15:26:812] nscon: sending NSPTCN packet
    (1) [22-MAR-2013 12:15:26:906] nscon: got NSPTRS packet
    (1) [22-MAR-2013 12:15:26:906] nscon: sending NSPTCN packet
    Under Oracle 10 there are 26ms between sending the NSPTCN packet and getting the NSPTRS packet, whereas under Oracle 11 this takes 94ms. I ran the trace again, this time with
    TRACE_LEVEL_CLIENT=16
    and got the following results for the critical interval:
    Oracle 10:
    [22-MAR-2013 13:17:37:638] nscon: sending NSPTCN packet
    [22-MAR-2013 13:17:37:638] nspsend: entry
    [22-MAR-2013 13:17:37:638] nspsend: plen=218, type=1
    [22-MAR-2013 13:17:37:638] nttwr: entry
    [22-MAR-2013 13:17:37:638] nttwr: socket 9 had bytes written=218
    [22-MAR-2013 13:17:37:638] nttwr: exit
    [22-MAR-2013 13:17:37:638] nspsend: packet dump
    <<packet dump removed>>
    [22-MAR-2013 13:17:37:638] nspsend: 218 bytes to transport
    [22-MAR-2013 13:17:37:638] nspsend: normal exit
    [22-MAR-2013 13:17:37:638] nscon: exit (0)
    [22-MAR-2013 13:17:37:638] nsdo: nsctxrnk=0
    [22-MAR-2013 13:17:37:638] nsdo: normal exit
    [22-MAR-2013 13:17:37:638] nsdo: entry
    [22-MAR-2013 13:17:37:638] nsdo: cid=0, opcode=68, *bl=512, *what=9, uflgs=0x0, cflgs=0x3
    [22-MAR-2013 13:17:37:638] nsdo: rank=64, nsctxrnk=0
    [22-MAR-2013 13:17:37:638] nsdo: nsctx: state=2, flg=0x4005, mvd=0
    [22-MAR-2013 13:17:37:638] nsdo: gtn=10, gtc=10, ptn=10, ptc=2011
    [22-MAR-2013 13:17:37:638] nscon: entry
    [22-MAR-2013 13:17:37:638] nscon: recving a packet
    [22-MAR-2013 13:17:37:638] nsprecv: entry
    [22-MAR-2013 13:17:37:638] nsprecv: reading from transport...
    [22-MAR-2013 13:17:37:638] nttrd: entry
    [22-MAR-2013 13:17:37:665] nttrd: socket 9 had bytes read=8
    [22-MAR-2013 13:17:37:665] nttrd: exit
    [22-MAR-2013 13:17:37:665] nsprecv: 8 bytes from transport
    [22-MAR-2013 13:17:37:665] nsprecv: tlen=8, plen=8, type=11
    [22-MAR-2013 13:17:37:665] nsprecv: packet dump
    [22-MAR-2013 13:17:37:665] nsprecv: 00 08 00 00 0B 00 00 00  |........|
    [22-MAR-2013 13:17:37:665] nsprecv: normal exit
    [22-MAR-2013 13:17:37:665] nscon: got NSPTRS packet
    Oracle 11:
    (1) [22-MAR-2013 13:33:40:504] nscon: sending NSPTCN packet
    (1) [22-MAR-2013 13:33:40:504] nspsend: entry
    (1) [22-MAR-2013 13:33:40:504] nspsend: plen=205, type=1
    (1) [22-MAR-2013 13:33:40:504] nttwr: entry
    (1) [22-MAR-2013 13:33:40:504] nttwr: socket 8 had bytes written=205
    (1) [22-MAR-2013 13:33:40:504] nttwr: exit
    (1) [22-MAR-2013 13:33:40:504] nspsend: packet dump
    <<packet dump removed>>
    (1) [22-MAR-2013 13:33:40:505] nspsend: 205 bytes to transport
    (1) [22-MAR-2013 13:33:40:505] nspsend: normal exit
    (1) [22-MAR-2013 13:33:40:505] nscon: exit (0)
    (1) [22-MAR-2013 13:33:40:505] snsbitts_ts: entry
    (1) [22-MAR-2013 13:33:40:505] snsbitts_ts: acquired the bit
    (1) [22-MAR-2013 13:33:40:505] snsbitts_ts: normal exit
    (1) [22-MAR-2013 13:33:40:505] nsdo: nsctxrnk=0
    (1) [22-MAR-2013 13:33:40:505] snsbitcl_ts: entry
    (1) [22-MAR-2013 13:33:40:505] snsbitcl_ts: normal exit
    (1) [22-MAR-2013 13:33:40:505] nsdo: normal exit
    (1) [22-MAR-2013 13:33:40:505] nsdo: entry
    (1) [22-MAR-2013 13:33:40:505] nsdo: cid=0, opcode=68, *bl=2048, *what=9, uflgs=0x0, cflgs=0x3
    (1) [22-MAR-2013 13:33:40:505] snsbitts_ts: entry
    (1) [22-MAR-2013 13:33:40:505] snsbitts_ts: acquired the bit
    (1) [22-MAR-2013 13:33:40:505] snsbitts_ts: normal exit
    (1) [22-MAR-2013 13:33:40:505] nsdo: rank=64, nsctxrnk=0
    (1) [22-MAR-2013 13:33:40:505] snsbitcl_ts: entry
    (1) [22-MAR-2013 13:33:40:505] snsbitcl_ts: normal exit
    (1) [22-MAR-2013 13:33:40:505] nsdo: nsctx: state=2, flg=0x4005, mvd=0
    (1) [22-MAR-2013 13:33:40:505] nsdo: gtn=10, gtc=10, ptn=10, ptc=8155
    (1) [22-MAR-2013 13:33:40:505] nscon: entry
    (1) [22-MAR-2013 13:33:40:505] nscon: recving a packet
    (1) [22-MAR-2013 13:33:40:505] nsprecv: entry
    (1) [22-MAR-2013 13:33:40:505] nsprecv: reading from transport...
    (1) [22-MAR-2013 13:33:40:505] nttrd: entry
    (1) [22-MAR-2013 13:33:40:618] nttrd: socket 8 had bytes read=8
    (1) [22-MAR-2013 13:33:40:618] nttrd: exit
    (1) [22-MAR-2013 13:33:40:618] nsprecv: 8 bytes from transport
    (1) [22-MAR-2013 13:33:40:618] nsprecv: tlen=8, plen=8, type=11
    (1) [22-MAR-2013 13:33:40:618] nsprecv: packet dump
    (1) [22-MAR-2013 13:33:40:618] nsprecv: 00 08 00 00 0B 00 00 00  |........|
    (1) [22-MAR-2013 13:33:40:618] nsprecv: normal exit
    (1) [22-MAR-2013 13:33:40:618] nscon: got NSPTRS packet
    Any ideas what could be the reason for this time difference? Is it something in our network configuration, or something else?
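    The handshake deltas quoted above (26 ms vs 94 ms) can be pulled out of such traces mechanically. A small sketch that converts the trace timestamp to a milliseconds-since-midnight value so two lines can be subtracted (the timestamp layout is taken from the traces above; crossing midnight is not handled):

```java
public class TraceDelta {
    // Extracts milliseconds-since-midnight from an sqlnet trace line whose
    // timestamp looks like "[22-MAR-2013 12:13:09:595]".
    static long toMillis(String traceLine) {
        String ts = traceLine.substring(traceLine.indexOf('[') + 1, traceLine.indexOf(']'));
        String[] f = ts.split(" ")[1].split(":"); // HH, mm, ss, SSS
        return ((Long.parseLong(f[0]) * 60 + Long.parseLong(f[1])) * 60
                + Long.parseLong(f[2])) * 1000 + Long.parseLong(f[3]);
    }
}
```

    Subtracting the "sending NSPTCN" line's value from the "got NSPTRS" line's value gives the handshake wait directly.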

    With local connections, I do not think a TCP packet sent from an IP to the same IP leaves the interface as an actual wire protocol/signal. If I'm correct, then running local connection tests will be mostly useless for checking the actual network infrastructure.
    Tests 3 and 4 should be showing the same connection times as the same physical network infrastructure is used - only the direction is reversed in the tests.
    I would assume that port settings on the switches and interface settings on the routers treat packets equally in both directions between 2 servers. But this could in part explain the problem if this is not the case. In a case of a router for example, the 1st test's ingress interface is the egress interface of the 2nd test (and vice versa). Configurations can differ substantially between interfaces on the same router. Likewise if there is a firewall - as different rule sets are applied in each test and these rule sets could differ.
    So I would not be too quick to state that this is definitely not a network problem. But I agree that based on the small percentage difference (assuming comparable tests), it does not look like a network issue.
    The next step is to determine what the delay is between the listener accepting the client connection, and the connection being serviced by a dedicated server process.
    This will require listener tracing - tracing the time from when the listener accepted the connection (and parsed the TNS connection string), to handing off the connection to the dedicated server process.
    As a comparison test, you can also test shared server connections. Dispatcher processes (of the db instance) register themselves with the listener. The shared server client hand off is thus done to an existing server process - no need for the Listener to make a kernel call to load and initialise an executable image.
    Shared connections are typically faster than dedicated connections in this respect.
    If there is a major time difference, then it means some kind of issue with the listener dealing with dedicated servers as opposed to dispatcher hand-offs. As both connections would have had very similar network transit times, the connection time difference would be related directly to dealing with a dedicated server connection request and hand-off.
    You can also substitute the oracle executable with a wrapper - and troubleshoot the actual dedicated server startup. I've only done this with Oracle XE 10.2 though and with local IPC connections. Unsure how robust this will be for testing purposes via TCP using 11g.
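    For the listener-tracing and shared-server comparison steps above, the usual Oracle Net parameters are sketched below (parameter names from the standard Net Services tracing documentation; the listener name LISTENER, the trace directory, and the host/service names are placeholders, not from this thread):

```
# listener.ora -- enable support-level tracing with timestamps
TRACE_LEVEL_LISTENER = 16
TRACE_TIMESTAMP_LISTENER = ON
TRACE_DIRECTORY_LISTENER = /tmp/lsnr_trace

# tnsnames.ora -- an alias for the shared-server comparison test;
# (SERVER = shared) forces a dispatcher connection
DBSHARED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb)(SERVER = shared))
  )
```

    Reload the listener after changing listener.ora so the trace settings take effect.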

  • Office 365 Streaming Notifications, "One or more subscriptions in the request reside on another Client Access server."

    Hello all,
    I am maintaining a part of our product that requires monitoring mailboxes for events. This is currently being done by using streaming connections to get the notifications. Our solution has been successful for smaller numbers of mailboxes (~200 or fewer). However, we are seeing some issues when scaling up to, say, 5000 mailboxes.
    The error and the sequence leading up to it are as follows:
    Make an Exchange Service Account.
    exchSvc.ConnectionGroupName = someGroupName;
    add to the httpheaders ("X-AnchorMailbox", userSmtp) and ("X-PreferServerAffinity", "true");
    create a new impersonated UserId for the userSmtp address that is our anchor mailbox.
    set the Exchange Service account ImpersonatedUserID to the one we just made.
    ExchangeServiceAccount.SubscribeToStreamingNotifications(new FolderId[] { WellKnownFolderName.Inbox }, _mailEvents);
    Up to this point everything was successful and we saw no error messages.
    We then create a second impersonated UserId for a different mailbox and repeat the process above from that step forward. Upon the final step, subscribing to the streaming notifications, we get the error:
    Exception: Microsoft.Exchange.WebServices.Data.ServiceResponseException: One or more subscriptions in the request reside on another Client Access server. GetStreamingEvents won't proxy in the event of a batch request.
    This is only the second subscription that we are trying to add to this connection, and it is to a different mailbox than the first.
    Can anyone please help point me to where this is going wrong?

    >> Is there a good way to verify the number of subscriptions in a group?
    Not that I know of. You should be tracking this in your code; there are no server-side operations in EWS that can even tell you whether there are active subscriptions on a mailbox.
    >>The error I am getting is on the second subscription in a new group, just after doing the anchor mailbox so I don't think we are hitting the 200 limit. 
    It's hard to say without seeing your code, but it sounds like there is a problem with your grouping code. One way to validate this: with every request you make with the EWS Managed API there is a
    RequestId header http://blogs.msdn.com/b/exchangedev/archive/2012/06/18/exchange-web-services-managed-api-1-2-1-now-released.aspx
    You should be able to give that RequestId to the Office 365 support people and they should be able to check the EWS log on the server and tell you more about what's happening (it may be a server-side bug). Something doesn't quite add up, in that the X-BackEndOverrideCookie
    is what ultimately determines what server the request ends up at, and the error is essentially telling you it's ending up at the wrong server (have you looked at the headers on the error message?). Is it always one group of users that fails? Have
    you tried different groups and different combinations, etc.?
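    A quick way to sanity-check the grouping logic: mailboxes have to be split into groups (at most 200 subscriptions per group in Office 365), with one member of each group used as the X-AnchorMailbox for that group's connection. A minimal sketch of such a helper (the class and method are hypothetical, not part of the EWS API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SubscriptionGroups {
    // Splits mailbox SMTP addresses into groups of at most maxPerGroup.
    // The first mailbox of each group is the key, intended to be used
    // as that group's X-AnchorMailbox.
    static Map<String, List<String>> group(List<String> mailboxes, int maxPerGroup) {
        Map<String, List<String>> groups = new LinkedHashMap<>();
        for (int i = 0; i < mailboxes.size(); i += maxPerGroup) {
            List<String> members =
                    mailboxes.subList(i, Math.min(i + maxPerGroup, mailboxes.size()));
            groups.put(members.get(0), new ArrayList<>(members));
        }
        return groups;
    }
}
```

    Every subscription made on a given group's connection should then impersonate a member of that group and carry that group's anchor headers.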
    Cheers
    Glen

  • One or more of the updates requested by Final Cut Server could not be...

    We are getting the warning "One or more of the updates requested by Final Cut Server could not be applied. (One or more replacement file(s) had unexpected track settings.)" when an FCP v6.06 (possibly also FCP v7.0) project file is checked out of Final Cut Server v1.5.1. The assets linked to the project are cached down successfully (this is not an Xsan environment), but opening the project alerts us to some offline media files (not all) and we need to manually relink them to the assets in the Final Cut Server cache. All is good again until the project file is checked out of FCSvr to another workstation.
    Looking at the paths of the offline files, they reference the assets' paths as they were on the previous editor's (different) workstation. None of our workstations have the same path to the Final Cut Server cache.
    My understanding and experience is that FCSvr manages the paths for FCP-linked assets and modifies the checked-out file's paths to point to the workstation's FCSvr cache.
    Can you tell me how I can resolve this issue?

    I may have resolved this issue today, or at least I'm at a stage where I can't replicate it. The problem may have been propagating from within a template FCP project file we use to start our productions. I assume it was a corruption of the FCP file that FCSvr didn't recognise.

  • I have about 800 more songs on my nano than in iTunes, due to a crash. How can I get the songs into iTunes from the iPod, or at least get the new music I've gotten since the crash onto my iPod without deleting all the other songs when it syncs?

    I have about 800 more songs on my nano than in iTunes, due to a crash. How can I get the songs into iTunes from the iPod, or at least get the new music I've gotten since the crash onto my iPod without deleting all the other songs when it syncs?

    It has always been basic practice to maintain a backup copy of your computer for just such an occasion. Use your backup copy to put everything back.
    If you have failed to back up, then you can transfer iTunes purchases from an iPod: File > Transfer Purchases.

  • AUDIO TO SCORE is gone and it was much more precise for drum doubling than the doubling function itself.

    AUDIO TO SCORE. I used that all the time for drum doubling for the following reasons:
    In Logic 9 I always used Audio to Score for drum doubling because it had much more control over essential parameters than the drum replacement window. Parameters like Granulation, Smooth Release and Attack Range made sure that I didn't have any flams, too few notes, too many notes, etc. AND you could immediately see in the sample editor (below the waveform) the effect of changes made to those parameters!
    Now, with these parameters gone and only the drum doubling function left, what was a real pro function has become much more cumbersome to use. You can no longer see in detail what changes to the parameters do to the resulting MIDI. And the lack of detailed parameters makes drum doubling drop or miss notes, create flams with the original drum track, etc. In short: the drum doubling function is inadequate for pro results.
    So this is my suggestion: Apple probably won't reinstate Audio to Score as it was (I'd love it if they did; it's essential to me and others who do rock mixes in Logic instead of Pro Tools). But you can do something that's even better:
    In the Audio File Editor (previously the Sample Editor), make the old Audio to Score into an advanced drum doubling/replacement editor. Take the Audio to Score window and parameters and reinstate them, BUT instead of outputting different note values, make them output a single note of the user's choice (e.g. C1), without opening the library. It clutters the window, and often people would rather use their own drum software than Apple's samples.
    It's the realtime view at the bottom of the Audio File Editor that makes the difference, along with the missing parameters. Without these things a good outcome is based on luck rather than visual reference.

    I was doing drum replacement (adding) with Audio to Score for years, and I must say the new drum replacement is working for me without any flams, much better and twice as fast. I think it is a pro feature, with an advanced transient detector. The only problem is velocity, as with Audio to Score.
    If the automatic threshold is not working on some material, you can do fine transient edits in the audio editor with visual realtime feedback, and you can change the output note and delay globally. You can also audition the result (compared with Audio to Score); that's why the library is opened and sample replacement comes after. It is not difficult to do.

  • EWS - Office 365 - "One or more subscriptions in the request reside on another Client Access server. GetStreamingEvents won't proxy in the event of a batch request."

    Hello
    My goal is to subscribe to streaming notifications for multiple users at the same time.
    One way to do that is to create multiple StreamingSubscriptionConnections, each containing one StreamingSubscription per user. The problem with this method is that in Office 365 the maximum number of connections opened is 20.
    Another method is to create one StreamingSubscriptionConnection and then add the StreamingSubscriptions for all users to that connection. This method solves the maximum-number-of-connections problem, and it works fine with Exchange on-premises. But when trying it with Office 365 it results in the SubscriptionError:
    "One or more subscriptions in the request reside on another Client Access server. GetStreamingEvents won't proxy in the event of a batch request."
    Can anyone help me here ? 

    With Office 365 you need to group your subscriptions and set the affinity headers; see
    http://msdn.microsoft.com/en-us/library/office/dn458789(v=exchg.150).aspx and
    http://blogs.msdn.com/b/mstehle/archive/2013/07/17/more-affinity-considerations-for-exchange-online-and-exchange-2013.aspx . Take note of the restrictions on the group, and of the other throttling restrictions if you're using only one service account.
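    As a sketch of what those affinity headers look like for each group (just the two header names already used in these threads; the helper class itself is hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AffinityHeaders {
    // HTTP headers that pin a subscription group to the Client Access
    // server of its anchor mailbox, per the Office 365 affinity guidance.
    static Map<String, String> forAnchor(String anchorSmtp) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("X-AnchorMailbox", anchorSmtp);
        headers.put("X-PreferServerAffinity", "true");
        return headers;
    }
}
```

    All subscriptions added to the same connection must have been created with the same anchor headers set; otherwise they land on different Client Access servers and the batch request fails with the error above.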
    Cheers
    Glen

  • When I create a circle in Photoshop and import it to Motion 5, it becomes more of an oval shape than a circle in Motion.

    When I create a circle in Photoshop and import it into Motion 5, it becomes more of an oval shape than a circle, even though the size of the document I worked on in Photoshop is the same as that in Motion 5, which is PAL D1/DV 720 x 576.
    Thanks

    hi,
    Check the pixel aspect ratio in both the Photoshop file and the Motion project and make sure they match too. Or simply resize in Motion until it is round again. Or, even easier, create the circle directly in Motion.
    hth
    adam

  • Multithreaded file copy takes 1.5 times more time than single thread.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.nio.channels.FileChannel;
    import java.util.concurrent.atomic.AtomicInteger;

    public class TestMulti implements Runnable {
        public static Thread th1;
        public static Thread th2;
        // Shared counter: must be atomic, otherwise the two threads can
        // generate duplicate sequence numbers and overwrite each other's files.
        static AtomicInteger seqNumber = new AtomicInteger(1000000000);
        String str;

        public TestMulti(String str) {
            this.str = str;
        }

        public static void main(String[] args) {
            th1 = new Thread(new TestMulti("1_1"));
            th2 = new Thread(new TestMulti("1_2"));
            th1.start();
            th2.start();
            try {
                th1.join();
                th2.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        public void run() {
            File f = new File("C:/Songs2/" + str);
            File[] files = f.listFiles();
            if (files == null) {
                return; // directory missing or unreadable
            }
            try {
                for (int j = 0; j < files.length; j++) {
                    File[] musicFiles = files[j].listFiles();
                    if (musicFiles == null) {
                        continue; // skip plain files at this level
                    }
                    for (int k = 0; k < musicFiles.length; k++) {
                        int n = seqNumber.getAndIncrement();
                        String seqName = "18072006" + n;
                        int sequenceNo = 10000 + n % 100;
                        String fileName = musicFiles[k].getName();
                        String fileExt = fileName.substring(fileName.length() - 3);
                        File dir = new File("C:/Songs1/" + sequenceNo);
                        if (!dir.exists()) {
                            dir.mkdir();
                        }
                        String targetFile = "C:/Songs1/" + sequenceNo + "/" + seqName + "." + fileExt;
                        // try-with-resources closes the channels even on error
                        try (FileInputStream fin = new FileInputStream(musicFiles[k]);
                             FileChannel fcin = fin.getChannel();
                             FileOutputStream fout = new FileOutputStream(targetFile);
                             FileChannel fcout = fout.getChannel()) {
                            fcin.transferTo(0, fcin.size(), fcout);
                        }
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    The multithreaded file copy takes 1.5 times more time than a single thread. Is there any issue with this code? Please help me.

    If all of your threads are doing CPU-intensive work, or all are doing I/O to the same interface (for example, writing to the same physical disk), then multithreading would not be expected to help you.
    Multithreading does not magically make your CPU able to do more work per unit time than it could otherwise.
    Multithreading does not magically make your network interface or disk controller able to pump more bytes through than it could otherwise.
    Where multithreading helps (some or all of this has already been mentioned):
    * When you have multiple, independent CPU-bound tasks AND multiple CPUs available on which to execute them.
    * When you have tasks that involve a mix of CPU-bound and I/O-bound work. The CPU-bound stuff can crank while the I/O-bound stuff waits for bytes to be written or read, thus making use of what would otherwise be CPU "dead time."
    What you're doing does not fit either of those scenarios. Copying a file is pure I/O. If the source and destination files are on the same physical disk or controller, adding threads only adds overhead with no real possibility of doing more work per unit time.
    If your source and destination are on different disks or controllers, then it's possible that you could get some benefit from multithreading. While one thread is waiting for bytes to be written to the target disk, the other thread can be reading from the source disk.
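    Given that reasoning, the simplest fix is usually to drop the second thread entirely. A single-threaded sketch using java.nio.file (the class name and paths are placeholders; error handling kept minimal):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SingleThreadCopy {
    // Copies every regular file under srcRoot to the same relative location
    // under dstRoot, on one thread: pure disk I/O on a single
    // spindle/controller gains nothing from extra threads.
    static void copyTree(Path srcRoot, Path dstRoot) throws IOException {
        List<Path> regular;
        try (Stream<Path> paths = Files.walk(srcRoot)) {
            regular = paths.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        for (Path src : regular) {
            Path dst = dstRoot.resolve(srcRoot.relativize(src).toString());
            Files.createDirectories(dst.getParent());
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```

    If the source and destination really are on different disks, a producer/consumer pair (one reader thread, one writer thread) is the variant worth benchmarking, not two independent copiers.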

  • More tracks on my iPod than in my library??

    When I update my iPod, it doesn't take into account all the changes I've made in my library. E.g., I've changed the artist name for a podcast and the file now appears under both the old and the new name. So my iTunes library and my iPod library don't coincide exactly, and I have more tracks on my iPod than in iTunes... What should I do?
    Thanks!

    Do you manually update or auto-update the iPod?
    Auto update should fix this.
    I auto update which is why I can't figure out why I don't end up with 2 identical libraries...

  • My iPad has more songs in my library than what is on my computer. How do I transfer all of my music from my iPad to my computer?

    My iPad has more songs in my library than what is on my computer. How do I transfer all of my music from my iPad to my computer?

    Only if the songs were bought from iTunes can you use Transfer Purchases.
    If not, you need a 3rd-party tool.

  • If you have more music on the computer than nano can store...

    If you have more music on the computer than the nano can store, how does it choose what goes on the iPod? Like, a normal sync just dumps everything; what happens when the iPod can't hold everything?

    Hi Eric,
    Your computer will pop up a message that it failed to update the songs to the nano. You must create a new playlist, copy the songs there, and make sure its size is not larger than the nano can hold.

  • iTunes saying I have more songs on my iPod than I actually do. Help!

    I currently have a good amount of music on my iPod touch and I still have over 10 GB left. But when I try to sync my iPod to the laptop, iTunes won't complete the sync, saying I have several thousand more songs on my iPod than I really do. And then it goes on to tell me that I need 11 GB more to finish the sync!
    What's happening? It was working and syncing perfectly fine yesterday.

    In iTunes, select your device and check the songs/albums you would like to sync with it. Do the same with your other device. This should sync only the selected songs/albums for that device.

  • LIVE HTTP STREAMING - More abst requests than fragments

    Hi
    This is the test setup:
    Wowza developer edition, no additional configuration (on my computer or on Amazon, it is the same).
    Adobe Flash Media Live Encoder 3.2.
    An OSMF 2.0 player.
    When I start playback, this is the request log. As you can see, OSMF asks for the abst more often than for fragments.
    Why?
    This is a big problem in a "pay per GET" environment.
    Thanks for any advice.
    All requests below are GET, status 200 OK, initiator "Other"; sizes and timings as reported:

    Request                            URL                                                      Type          Sizes                Timings
    manifest.f4m                       localhost/live/myStream                                  text/xml      1.29KB / 1.14KB      18ms 17ms 17ms 1ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          16ms 14ms 14ms 2ms
    Seg1-Frag221                       localhost/live/myStream/media_b280893_w1954320162.abst   video/mp4     529.34KB / 529.19KB  59ms 53ms 53ms 6ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          19ms 18ms 18ms 1ms
    Seg1-Frag222                       localhost/live/myStream/media_b280893_w1954320162.abst   video/mp4     166.49KB / 166.34KB  29ms 26ms 26ms 3ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          14ms 13ms 13ms 1ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          14ms 14ms 14ms 0ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          16ms 15ms 15ms 1ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          15ms 14ms 14ms 1ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          14ms 13ms 13ms 1ms
    Seg1-Frag223                       localhost/live/myStream/media_b280893_w1954320162.abst   video/mp4     360.23KB / 360.08KB  51ms 45ms 45ms 6ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          16ms 14ms 14ms 2ms
    Seg1-Frag224                       localhost/live/myStream/media_b280893_w1954320162.abst   video/mp4     532.51KB / 532.36KB  59ms 52ms 52ms 7ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          19ms 18ms 18ms 1ms
    Seg1-Frag225                       localhost/live/myStream/media_b280893_w1954320162.abst   video/mp4     166.04KB / 165.88KB  33ms 29ms 29ms 4ms
    playlist_b280893_w1954320162.abst  localhost/live/myStream                                  binary/octet  339B / 185B          14ms 13ms 13ms 1ms
    Seg1-Frag226                       localhost/live/myStream/media_b280893_w1954320162.abst   video/mp4     362.01KB / 361.86KB  53ms 47ms

    This is how live HDS streaming works. The abst requests are for the index file, or bootstrap, which contains the fragment run table telling the player the available fragments to request and also telling it whether the stream is still live. The player can only request the fragments it knows about (unless you enable the best effort fetch feature), and it must get them from the bootstrap. It will continue to make bootstrap requests until either the stream goes from live to not live, or until it gets a fragment run table containing the next fragment number.

  • More records in Purchasing Cube than in PSA.

    Dear Folks,
    Right now I'm working on the 0PUR_C01 cube. In the Quality server, I checked the data in the cube after executing the DTP and found that the number of records is increasing: 2152 records in the PSA, but 2359 in the cube. There are some routines in the transformation; I checked them too and they are OK. The transformation has 2 rule groups. And the interesting thing is that we have the key figure ZDCOUNT with rule type constant (1), and I found that for some records count = 0.
    I'd appreciate your quick reply.
    Regards,
    Rams.

    Hi
    As you have a small number of records, run the DTP in debug mode (simulation mode) and check why you are getting 0 values for the count key figure.
    If you have an end routine or expert routine, you may get more records in the target than in the source.
    Regards,
    Venkatesh
