Why is drum machine designer pre-loading tons of busses.... ?

Is there a way to turn off the pre-mapping? I loaded up a few presets and the routing is insane - not simple to follow the chain, and I have a huge monitor, haha. It instantly bogs down my computer.

The DMD is really a preset combo of samples and FX.
If you notice, the DMD 'plugin' isn't actually a normal plugin, even though it goes into the instrument slot. It doesn't have a power button, for example, and you cannot drag the DMD instrument by itself to another track's instrument slot.
It's actually Ultrabeat running in the background, playing samples, plus FX plugins on various auxes, plus a fancy front end to manipulate the sounds - all done via a very clever bit of work in the Environment.
To see what's going on under the hood, go to the Environment window, and you will see this...
Note: this is just one track using the DMD.

Similar Messages

  • Drum machine designer bug?? (or is it just me??)

    When I use Drum Machine Designer I can't reset the channel strip, and the normal arrows to change the plug-in are not there, nor is the on/off button - but they are there for every other plug-in. The picture shows the missing power button and arrows that normally bring up the plug-in list.

    So you can't just choose another instrument?
    From the choices given to you in the DMD library, yes: just select the pad, and then select the pad sound you wish to use from the library browser.
    I think it's weird that you can't bring up the plug-in list.
    Again, please read the info in the link I provided: DMD is NOT a plugin, even though it resides in the instrument slot. It's a 'special' type of patch.
    To delete a DMD track, just delete the main track and add a new one.

  • Pre-loaded Ring Tones Silent after installing Tone Room

    I recently installed Tone Room and now my ringtones are all silent. I uninstalled Tone Room (I had never even tried one of their tones), but still all I get now is vibrate, even though I've set my notification profile to Loud. I went into the pre-loaded ringtones folder, and even though they are all there, they don't have any sound; they are all listed at 0:00 duration.
    Can anybody suggest how I can retrieve the original ringtones? If I do a restore from a previous back-up, what file should I restore?
    Thanks in advance
    Madrone

    I just solved this problem by doing a soft reset.
    Madrone

  • Why do all of the drum kits in Logic Pro X have a delay? All of my drum hits (drum kits only, not drum machine) are behind the beat.

    When I turn on the metronome, it's in sync with EVERYTHING except the drum kits.
    Is anyone else having this problem?
    When I change the plugin on these tracks to an LPX drum machine or to a 3rd-party drum plugin, the problem goes away.

    Is Plug-in Latency Compensation enabled?
    Logic Pro X > Preferences > Audio > General

  • How do you load samples into samplers/drum machines

    Hi, I've been using Logic a short while and have simply been dragging and dropping my samples (kicks, hi-hats, snares, etc.) straight into the arrangement, but I was wondering how I get my samples to route through the samplers and drum machines so I can alter them, etc.
    Thanks in advance,
    Kirk

    You certainly can, and in the long run it will be much easier than doing it the hard way as you are now. But since it isn't easily explained in a forum discussion, you should take Eriksimon's advice and read the chapter in the manual he mentions.
    Your other option is to check out Ultrabeat.
    If you get stuck, let us know.
    (If you had been talking about Loops, it would have been a different kettle of fish!)

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle Text index into memory: http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application (Oracle 12c), we are indexing a big XML field (stored as XMLType with securefile storage) with the PATH_SECTION_GROUP. If I don't load the I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory, performance can fall sharply).
    But after migrating to Oracle 12c I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$ I table in memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can no longer pin the index in memory. I am not sure if it is a bug or not.
    What I found as a work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the TOKEN_INFO column will be stored in a securefile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it is loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure that reads the LOB and loads it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the S table) and an IOT on top of it. This is not covered in the white paper (it was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT as well, but putting them in the keep cache is not as important as the TOKEN_INFO column of the DR$ I table. A final note: SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
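    As an aside, you can list the DR$ objects behind an index (the $I table, the $S table and its IOT, and so on) with a simple dictionary query; a quick sketch using the I_TEST example below:
    -- list the Oracle Text internal objects created for index I_TEST
    SELECT object_name, object_type
    FROM   user_objects
    WHERE  object_name LIKE 'DR$I_TEST$%'
    ORDER  BY object_name;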
    Here is an example of how to reproduce the problem of the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
     XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
    exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP',
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says:
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576 KB to 8.44 MB. With a big index the relative difference is not as big, but it still grew from 14 GB to 19 GB.
    6. Workaround: use the BIG_IO option, so that the TOKEN_INFO column of the DR$ I table will be stored in a securefile and the size will stay relatively small. Then you can load this column into the cache using statements and a procedure similar to the following:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem now we must read the LOB so that it will be loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);      -- dbms_lob.read on a BLOB requires a RAW buffer, not VARCHAR2
      siz number;
      off number;
      cntr number;
    begin
      s := 'select token_info from DR$i_test$I';
      open c2 for s;
      loop
        fetch c2 into b;
        exit when c2%notfound;
        siz := 10;
        off := 1;
        cntr := 0;
        if dbms_lob.getlength(b) > 0 then
          begin
            -- read a small chunk every 4K so each block of the LOB is touched and cached
            loop
              dbms_lob.read(b, siz, off, buff);
              cntr := cntr + 1;
              off := off + 4096;
            end loop;
          exception when no_data_found then
            -- reading past the end of the LOB raises NO_DATA_FOUND: this LOB is done
            if cntr > 0 then
              dbms_output.put_line('4K chunks fetched: '||cntr);
            end if;
          end;
        end if;
      end loop;
      close c2;
    end;
    /
    Rgds, Pierre

    I have been working a lot on that issue recently, and I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for them.
    What kind of performance do you have with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do: MongoDB explicitly says that the index must be in memory, and Elasticsearch uses JVMs that are also in memory. And effectively, if you look at an AWR report, you will see that Oracle is continuously accessing the DR$I table with a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    This statement is executed continuously.
    I think that the algorithm used by Oracle to keep blocks in the cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" feature: the In-Memory parameters, with which you can pin tables or columns in memory, with compression, etc. This looks ideal for the text index; I hope R. Ford will finally update his white paper. :-)
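    As a sketch of what I mean (untested; it assumes the Database In-Memory option is licensed and inmemory_size is set, and whether the TOKEN_INFO LOB itself would be populated is another question):
    -- hypothetical: pin the Text $I table in the 12.1.0.2 In-Memory column store
    ALTER TABLE DR$I_TEST$I INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY CRITICAL;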
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size. It seems crazy that this was closed as "not a bug", but it was, and I can't do anything about it. It is a bug in my opinion, because the CREATE INDEX command and the ALTER INDEX REBUILD command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index twice as big?
    The track I have been following instead is to put the index in a 16K-block tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because R. Ford's trick no longer works.
    What worked:
    First set the keep pool to zero and size db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) lands in the tablespace with the non-standard 16K block size.
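    Roughly, the settings look like this (the sizes and the tablespace name are illustrative, not our real values):
    -- hypothetical: trade the keep pool for a 16K buffer cache
    ALTER SYSTEM SET db_keep_cache_size = 0 SCOPE = BOTH;
    ALTER SYSTEM SET db_16k_cache_size = 4G SCOPE = BOTH;
    -- illustrative 16K tablespace, plus a storage preference routing the $I table into it
    -- (takes effect when the index is created or rebuilt)
    CREATE TABLESPACE ts_ctx_16k DATAFILE SIZE 10G BLOCKSIZE 16K;
    exec ctx_ddl.set_attribute('TEST_STO', 'I_TABLE_CLAUSE', 'tablespace ts_ctx_16k');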
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which, sorry for the missing credit).
    I ended up doing the following; setting event 10949 avoids the direct path read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is almost exactly what is described in MetaLink note 1645634.1, but for a non-partitioned index. The work-around given there seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a work-around, but I did not find one on MetaLink.
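    For reference, a minimal sketch of the kind of call that triggers it for me (parameter values are illustrative; it assumes the local partitioned index was first created UNUSABLE, as dbms_pclxutil expects):
    -- hypothetical: intra-partition parallel build via dbms_pclxutil
    begin
      dbms_pclxutil.build_part_index(jobs_per_batch => 2,      -- index partitions per batch
                                     procs_per_job  => 4,      -- parallel servers per partition
                                     tab_name       => 'TEST',
                                     idx_name       => 'I_TEST',
                                     force_opt      => TRUE);  -- force the degrees given above
    end;
    /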
    Other points of attention with text index creation (stuff that surprised me at first!):
    - If you use the dbms_pclxutil package, then ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_job.
    - In combination with that, on a RAC you may see no activity at all on your box, which can be very frightening: Oracle can choose to start the workers on the other node.
    I understand much better now how text indexing works, and I think it is a great technology which can scale via partitioning. But as always, the design of the application is crucial; most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better, IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre

  • [LPX] Drummer/Drum Kit Designer memory leak causing crash

    After working with LPX for a couple of days after release with almost no problems save for a few system overload errors that always seemed to sort themselves out, I loaded up a project I had been working on to find that it crashed every time I tried to play the track. Other, smaller projects, still played in varying degrees, some having to wait for a long time seemingly for samplers to load up etc.
    I did some testing (painstaking, considering that I was having to force-quit the application for my computer to even run again; as an aside there were never any error reports for whatever reason) and found that it appears to be Drummer/Drum Kit Designer that's causing massive memory usage. Looking at the activity monitor when loading the project revealed that the memory usage would just go up and up until it maxed out my (rather modest) 4GB RAM, at which point there would be no free memory and the computer would slow almost to a standstill. Removing all instances of Drum Kit Designer and Drummer reduces the load to a point where it is once again manageable.
    This seems like a fault in Drummer, I'm thinking possibly a memory leak. Has anyone else been experiencing this problem/anyone have any thoughts as to how it could be fixed?
    Thanks in advance,
    Jasper

    This is not a memory bug. It's simply the nature of the new Drummer.
    Drummer uses a LOT of samples. It's a 13GB download for a reason. You will need more than 4GB to use drummer and not run into issues.
    The nature of the modern Logic engine - which seems unchanged from v9 - makes it very bad at telling when it's running out of memory. Logic will simply freeze or crash most of the time it runs out. The freeze comes from it using literally every last MB of your RAM without overstepping the boundary, and the crash comes from it claiming to need more real RAM than you have spare.
    Producer Kits use 5.1 surround samples and submix them to stereo in a way that positions them around the space. Whilst that's a doddle for a modern machine, your Mac is rather long in the tooth to be trying it whilst struggling to handle 1.2GB patches with 4GB of RAM total.

  • Pre-Loaded 5th Generation iPod

    Hey Everyone -
    First off, I want to say that I just received my first-ever iPod, a black fifth-generation unit. It was purchased for me by a friend who pre-loaded it with a ton of music. I went to start loading music videos onto my unit when I bumped into a problem.
    I share a computer with my wife, who has an older, non-color-display iPod (our computer is an IBM NetVista running Windows 2000). She already has iTunes on our computer, with an extensive library of her music files. When I plug in my new iPod, iTunes detects the unit and asks if I'd like to sync it with my wife's library, which is already on our computer. I always select "no", as I do not want to lose the existing music files that my friend pre-loaded onto my iPod and have them replaced with my wife's. Once I select "no", it seems there is nothing else I can do in iTunes that will let it detect my unit, create a library for me based on the music files already on it, and let me purchase my own music to add to my iPod. Is there any way around this? Any help you can offer would be greatly appreciated, and thanks in advance for your advice.

    What you are trying to do is considered music piracy, and it's illegal.
    You cannot keep the content that your friend put on it.
    The Apple iPod was designed specifically to hinder this type of illegal file sharing.
    If you want the tracks that your friend has, I suggest you purchase them legally for yourself.
    Please don't steal music!

  • Data Pre Load

    Hi,
    I have two servers, Server 1 and Server 2 (I mean two different machines), and both are configured with a distributed cache. For example, Server 1 has a cache named "ABC" and Server 2 has a cache named "XYZ".
    I want to pre-load the same data to both servers with different cache names, and my job Java class points to the tangosol override file for Server 1.
    How can I load the data to Server 2 with Server 1's tangosol override file?
    I don't want to use the push replication pattern.
    Thanks,
    Raj.

    Hi Raj
    You cannot get the storage-enabled members from an Extend client, as the cache service will be a RemoteCacheService, or more likely a SafeCacheService wrapping a RemoteCacheService.
    I am not sure I follow exactly what you are trying to achieve from your posts above.
    Why do you want to get the storage-enabled members? Is it so that you can make the same call invocationService.execute(task, Collections.singleton(members.toArray()[0]), null); against cluster two from cluster one?
    If you want to do this then you need two invocables: one is the LoaderInvocable class you mention above, and the other looks like this:
    public class RemoteLoaderInvocable extends AbstractInvocable {
        private String cacheName;
        private String springBeanId;

        public RemoteLoaderInvocable() {
        }

        public RemoteLoaderInvocable(String cacheName, String springBeanId) {
            this.cacheName = cacheName;
            this.springBeanId = springBeanId;
        }

        @Override
        public void run() {
            NamedCache cache = CacheFactory.getCache(cacheName);
            // find the storage-enabled members that own the cache's partitions
            Set memberSet = ((PartitionedService) cache.getCacheService()).getOwnershipEnabledMembers();
            LoaderInvocable task = new LoaderInvocable(cacheName, springBeanId);
            // query(Invocable, Set) takes the member set directly
            getService().query(task, memberSet);
        }
    }
    You would need to make the above class properly implement POF or whatever you are using for serialization.
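    For what it's worth, a minimal sketch of the POF side, assuming RemoteLoaderInvocable implements com.tangosol.io.pof.PortableObject and is registered with a user type id in your POF configuration (the indexes 0 and 1 are illustrative):
    // hypothetical PortableObject methods for RemoteLoaderInvocable
    // (needs com.tangosol.io.pof.PofReader, com.tangosol.io.pof.PofWriter, java.io.IOException)
    public void readExternal(PofReader in) throws IOException {
        cacheName = in.readString(0);
        springBeanId = in.readString(1);
    }
    public void writeExternal(PofWriter out) throws IOException {
        out.writeString(0, cacheName);
        out.writeString(1, springBeanId);
    }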
    You then declare a remote invocation scheme, like you have for the remote cache scheme in cluster 1:
    <remote-invocation-scheme>
        <scheme-name>tier-2-proxy-invocation-scheme</scheme-name>
        <service-name>ExtendTcpProxyInvocationService</service-name>
        <initiator-config>
            <tcp-initiator>
                <remote-addresses>
                    <socket-address>
                        <address>localhost</address>
                        <port>6005</port>
                    </socket-address>
                </remote-addresses>
                <connect-timeout>20s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
                <request-timeout>20s</request-timeout>
            </outgoing-message-handler>
        </initiator-config>
    </remote-invocation-scheme>
    Then in cluster 1 you can do:
    InvocationService cluster2Service = (InvocationService) CacheFactory.getService("ExtendTcpProxyInvocationService");
    cluster2Service.query(new RemoteLoaderInvocable(cacheName, springBeanId), null);
    That will allow you to execute your LoaderInvocable from cluster 1 against the storage-enabled members in cluster 2 but, as I said, I don't see what you are really trying to do.
    JK

  • Lag after pre-loader hits 60% and before movie starts playing.

    Hi all,
    I'm hoping somebody can shed some light on an issue that I'm experiencing with a Captivate project I'm working on. I'll try to explain this in detail so you understand the background and scope of the project. These are being exported as HTML/SWF and viewed within IE6.
    I've been tasked with taking a new-employee training course - currently handled by a live person standing in front of a room of people, showing them an 18-minute VHS tape and covering information presented in a PowerPoint file - and making it into a Captivate movie.
    I was able to get the original digital video cut into 4 chunks and converted to SWF by the media group that produced the tape. The four SWF files are 320x240 and range from 13 to 22 megs. Because of the size of the files and the amount of content, I created 8 'modules' that make up the larger project. The last slide of the first module launches the 2nd module, and so on, until the user has seen all of the modules; on their side this is mostly transparent (other than a brief pre-loader popping up at times), and they think they are just watching a 30-minute training course.
    Basically, what I've done is lay out the modules so that the whole course alternates back and forth between text slides with voice-overs and the movie clips. The movie clip SWFs have been embedded into every other module, so:
    Module 1: Intro to the course (text & voice-overs)
    Module 2: Video clip 1
    Module 3: Discussion about stuff (text & voice-overs)
    Module 4: Video clip 2
    Module 5: Discussion about stuff (text & voice-overs)
    Module 6: Video clip 3
    Module 7: Discussion about stuff (text & voice-overs)
    Module 8: Video clip 4 and wrap-up
    Now here is the problem: we have 20+ offices, and I am located at the main office where the web servers are. If I run through this here, everything works great. I tested it today in one of our off-site locations, which has a semi-fast connection but still goes across a WAN.
    Module 1 loads fine: it hits 60%, plays, and the user can click Next, Next, Next to proceed through the training. At the end of Module 1 we prompt them with some text to let them know there may be a brief delay before they see the upcoming video clip. Click Next, and Module2.swf loads, with videoclip1.swf embedded in it (21 megs). It takes a little while to load, which is expected. It hits 60%, the preloader goes away, and then there is just a white screen for at least 10 seconds (sometimes 20-30 seconds, depending on the size of the embedded movie clip) before the movie actually starts playing. This is not acceptable, as the user will think that something is broken. If the movie is not completely loaded, then it shouldn't have hit 60% and the preloader graphic disappeared, right?
    I think the problem revolves around the movie clip not being able to play until it is fully loaded, even though the module has loaded the 60% it needs to begin playing. The only way around this that I see is to re-arrange my slides so that the video clip is not the first slide to play in any given module.
    We tried this on at least 7 computers, all of which were rolled out new last fall, so the machines are definitely beefy enough to handle this.
    Any ideas?
    Thanks,
    Mike

    quote:
    Originally posted by:
    CatBandit
    I just read your title, Mike (I don't have time for the whole article - sorry), but felt it might help to point out that 60% is 40% less than 100%. Nope, I am not being a wise guy, but I know it's possible to have so much "weight" in the content of the last 40% that additional delay might be required while an (early) required element or object finishes down-streaming. Just a thought...

    Hi Larry,
    Good to see you back on the forums, and thanks for the reply. What you posted further proves my point. From what I've been taught about Captivate, it loads to 60% and begins to play while the remaining 40% loads - not loads to 60%, and then some portion of the remaining 40% may or may not have to load before it starts. If that's the case, why not just load 100% before it starts and then have no problems?
    It shouldn't matter how large the file is, right? The only difference is that larger files would take longer to reach the 60% mark. 60% is 60% is 60%. Compare a feather to a car: 60% of the weight of each item is still 60%; it doesn't matter how much it weighs. 60% of a cubic foot of rock is the same percentage as 60% of a cubic foot of marshmallows.
    Anyway, I think the solution here is to fudge the problem by putting a few 'less weighty' slides in front of the slide with the embedded video, so that the user has something to look at other than a blank screen. I was just hoping somebody else might have experienced this and had a magic button I could press to make it all better.
    I could handle this if the preloader kept running so the user had some clue that it was still working, but when the preloader hits 60% and disappears and then nothing happens for another 10-20 seconds, that's a problem.

  • Can I install Snow Leopard on the latest Macbook Pros? (the ones pre-loaded with Lion)

    The problem and solution are pretty simple; I just want to know if anyone has tried this before. I have a brand-new MacBook Pro that I bought more or less for the sole purpose of having a more powerful machine to run AVID Media Composer on. AVID is only compatible up to OS version 10.6.7 at the moment, and the machine I got came pre-loaded with Lion, 10.7. So I looked at the support documentation Apple provides, and noticed that in the nifty little chart they have, the latest line of MacBook Pros (early 2011) originally shipped with version 10.6.6, so I'm assuming that's the earliest version I can downgrade to.
    To revert back to Snow Leopard, however, I need to install it from the DVD which has version 10.6.3 on it, and then upgrade to any version between Snow Leopard 10.6.6 and Lion 10.7. In theory this could work, the only problem being that for a brief time between installing Snow Leopard and updating to the version of it that I need, the computer will have version 10.6.3 on it.
    Now I'm pretty sure if the only thing I do on the computer is immediately update to a safe-to-use version, there will be no problems. However, if the machine's hardware is so terribly non-backwards compatible with the Snow Leopard OS, I may do all this backing up and reverting and not even be able to start the computer once I get the old install on it. Before I just go ahead and try this for myself, I was wondering if anyone else has, and more importantly, have you had any success?

    Hi r,
    EDIT: disregard my post. Waiting for that disc is a far better option.
    I hope w won't mind if I add a thought here:
    rmo348 wrote:
    Now I don't mind if some drivers are messed up and resolution is all funky when I install 10.6.3 on it, I just need to know if it'll be functional to the point where I can run the 10.6.6 update dmg. Once I update to 10.6.6 everything should work fine.
    Or could I install 10.6.3, have the 10.6.6 update burned to a disc, and boot straight from that? This is my first Mac so I'm not sure what little tricks work or not.
    Your first idea may work; the only way to know for sure is to try it. If you do, make sure you download and run the Combo update for 10.6.7 or 10.6.8. There can be different versions of a point update, those which are available for download, and those which ship on Macs, so you want to install one beyond that which shipped with some of the new MBPs.
    If the MBP won't boot to 10.6.3, something else to try is installing it to an external HD, applying the 10.6.7 update there, cloning it to the MBP's internal HD, and running the 10.6.7 or 10.6.8 Combo update on it.

  • Dual boot Win7 pre-loaded and XP

    I have a T510 that came pre-loaded with Win7 Professional.  I'd like to dual boot with XP that I have loaded on a different hard drive on the same T510.
    I do have the recovery disks. I can't do a fresh install of XP because it has over 300 programs installed. Is there some way to run the recovery disks with Win7 Pro so that they do not format the XP partition? Can I make the XP partition read-only and then run the recovery disks?
    Maybe someone knows of a way to run the recovery disks, then (I know how to move Win7, make a partition before Win7, and restore XP to the hard drive) repair Win7 so it has the correct drive letters.
    Thanks,
    Docfxit

    Docfxit wrote:
    That's a good idea.  I might end up doing this.  The problem I have is I can't get all the Lenovo setup/programs that come with the restore disk to load.  I know I can get some of them loaded with System Update.
    What setup/programs will you be missing? I run a vanilla Win 7 Pro 64 on my T400 and used TVSU to pull down everything that I need.
    It may be that some multimedia stuff will be missing, but grab the installers for them from your Lenovo Win 7 setup before wiping it or removing that drive. Probably in C:\SWTOOLS (at least they were there in the old days; I don't have a Lenovo '7 load to examine).
    I'd probably do something like this, since I can't resist messing with a multiboot machine: clone the XP install to a large HD - or use the one you have if it's big enough. Shrink the XP C: partition with your favorite tool. Fire up an MS '7 install DVD and install into the free space. I'm told you can use the Win 7 activation code that came with your machine, but you may have to call MS.
    Install TVSU for Win 7 and let it pull down the rest.  Come to think of it, that's what I did with my XP T61.
    Z.

  • Ultrabeat - how to use it as a drum machine ... any ideas / help

    I want to use Logic's Ultrabeat as a drum machine/sequencer. Are there any manuals online, or can anyone offer help?
    Thanks

    I humbly recommend reading the fine manual that came with LE. I found a vast supply of information regarding Ultrabeat there.
    The most important thing, of course, is the step sequencer and its pattern library; you can find it at the lower left of the UB interface. These preset patterns can be replaced by your own, so quick composing of a UB drum sequence is possible. Simply select and drag them to a MIDI region to insert them into your song.
    For this, UB works much the same as most step-sequencer drum machines.
    You can also trigger certain patterns with a MIDI event. Say you use the lowermost octave of your MIDI keyboard to trigger the patterns; then you can use UB as a live-performance drum machine. There are many ways to do this - just look into the manual (I can't remember it all).
    The next thing is: UB lets you synthesize your own drum sounds. It's a complete synth with different LFOs, filters, and so on. Lots of technical stuff - one **** of a machine for those who know how to tweak. And you can also load your own drum samples into it.
    So much for the moment. Sorry for the RTFM, but I think UB, with its complexity, needs you to do this.
    Cheers + grooves,
    Fox

  • How can I choose ringtones instead of pre-loaded sounds for alarms

    I have a bunch of questions. First, how can I choose ringtones (my last phone had a category called "tones") and other purchased sounds instead of the pre-loaded sounds for my alarms, text message alerts, etc.? Thanks.

    Apple no longer sells ringtones... Besides, you never needed to buy them because they're easy to make! All you need is DRM-free music (burned from CD, or iTunes Plus). Read this link for instructions:
    http://www.ehow.com/how2160460custom-iphone-ringtones-free.html

  • Oracle Pre-Load Actions error

    HI All,
    I am trying to install SAP R/3 4.7 IDES, Oracle 9 with patch 9.2.0.3.0, on Windows Server 2003 SP2. I have encountered the following error during the Database Instance installation step of the Oracle Pre-load Actions:
    CJS-00084 SQL Statement or Script failed. Error Message: Executable C:\oracle\ora92/bin/sqlplus.exe returns 3.
    ERROR 2007-06-13 10:25:46
    FJS-00012 Error when executing script.
    I have already tried the following solutions:
    1) The directory C:\oracle\ora92/bin/sqlplus.exe is there, and I am able to connect as sysdba.
    2) I changed the "C:\SAPinst_Oracle_KERNEL" name to one with no spaces.
    3) Installed Oracle without installing the database.
    I think I am making a mistake in the following two steps:
    1) I did install the MS Loopback adapter, but what exact entry do I need to put in the hosts file?
    2) When I installed Oracle and ran the "Sapserver.cmd" file, a Windows message popped up asking which program I want to use to open the "Sapserver.rsp" file, and I chose Notepad.
    After Oracle installed, I ran some SQL*Plus commands and was able to connect as sysdba, but the listener status command showed one listener running and the other with status unknown. After the installation stopped, listener status gives a TNS adapter error.
    Please help.

    >> I have already tried the following solutions:
    >> 1) The directory C:\oracle\ora92/bin/sqlplus.exe is there, and I am able to connect as sysdba.
    >> 2) I changed the "C:\SAPinst_Oracle_KERNEL" name to one with no spaces.
    >> 3) Installed Oracle without installing the database.
    Never change the name of the SAPinst working directory. This version is designed to use a directory with spaces, and it will work with it.
    What you should not do is to copy additional CDs (like Export, Kernel) into directories having blanks in the path.
    >> 1) I did install the MS Loopback adapter, but what exact entry do I need to put in the hosts file?
    >> 2) When I installed Oracle and ran the "Sapserver.cmd" file, a Windows message popped up asking which program I want to use to open the "Sapserver.rsp" file, and I chose Notepad.
    That should not happen at all.
    The response file is opened by Oracle's setup.exe, which is invoked by sapserver.cmd. Is the Oracle CD located in a directory containing blanks?
    Peter
