Large flash size problem
In China, I found that a 2 MB animation-style Flash movie on our
website takes more than one minute to download to clients' PCs.
Users have to wait until the download of the whole Flash movie
completes. Could we cut/split the Flash into smaller segments so
that we can start playing the first segment while the download of
the remaining segments is in progress?
If yes, how can we do it while keeping the animation running
smoothly, without the client's intervention?
Thank you for your help :)
Best regards
Wilson
Hey
Wilson... it loads fine for me, so I'm not sure I know what
you're asking... but if I understand it, what I would do is have
the timeline start loading in movies as it plays. Say the entire
film is 100 frames. Have the first part be just 10 frames, and at
frame 5 have it start loading the movie that will take over at
frames 10 to 20... then at frame 25 have it load frames 30 to 40... and so
on. Have them load in the background of the movie, so the main
movie is on level 3 and the loaded movies are on level 2 underneath it. Then
at frames 10, 20, 30, etc. it would move the level 2 movie up to
level 3 and start it playing.
Does this make sense?
Another way to do it would be to use a pre-loader that can
start playing the movie after a certain amount has been loaded.
I hope this is what you're asking about... and hopefully it
will help.
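For what it's worth, here is a rough ActionScript 2 sketch of the frame-script approach described above. The frame numbers, level numbers, and the filename "part2.swf" are illustrative assumptions, and instead of moving a clip between levels it simply starts the preloaded segment once it has finished downloading:

```actionscript
// Frame 5 of the main timeline: start loading the next segment
// into a lower level while the current segment keeps playing.
loadMovieNum("part2.swf", 2);

// Frame 10 of the main timeline: hand over to the next segment
// if it has finished loading, otherwise loop back and wait.
var next:MovieClip = _level2;
if (next.getBytesTotal() > 0 && next.getBytesLoaded() >= next.getBytesTotal()) {
    next.gotoAndPlay(1);   // start the preloaded segment
    stop();                // park the current segment
} else {
    gotoAndPlay(_currentframe - 1);  // re-check on the next pass
}
```

The other approach mentioned (a pre-loader that starts playback after a certain amount has loaded) can be built from the same getBytesLoaded()/getBytesTotal() pair.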
Similar Messages
-
Large email size problem - help me
Hello Friend,
We have 4 systems sharing a broadband internet connection through an 8-pin connector. My problem is that large emails will not go out through Outlook Express 6: anything over 50 KB fails to send. On the main system, emails with large attachments go out easily and FTP uploads also work. On the other 3 systems, outgoing email fails and FTP upload does not work either. Please help me with a solution urgently.
I'm very sorry, I can't help you at all with your FTP problems. Likely I'll not be able to help you with your other problem, either. I can't tell what software you're using, or even if Sun Java Enterprise Messaging Server is involved at all.
I suspect that you are having some kind of timeout/network problem. I have no idea what your "8 pin connector" means, nor what else you're doing. I'd be calling your ISP first, for their help. -
Web Flash Gallery: Problem setting Large Images size
When using the Lightroom Flash Gallery I am not able to change the size of the Large Images (Web | Appearance | Large Images | Size). When I set the size to Medium or Small, the large images always end up at the same size as when set to Large. Only when setting the size to Extra Large is there a slight change in size.
How can I change the Large Images size to Medium and Small?
Jørgen Johanson
jools2005 wrote:
Adobe please address this issue.
This is a User to User forum, not official Adobe support. Yes, Adobe staff may participate from time to time, but not often enough that you can depend on them reading your post. Therefore, I suggest that you submit a bug report using the facilities provided at https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform -
Context: I am very competent with GFX and have worked on many projects.
When dragging an image onto a fresh canvas, the resizing feature pre-snaps the image to the canvas size. I can confirm that this has not happened to me in previous projects. I've also tested this problem by making a new, larger document and dragging in the same render. It snaps again. I don't want to resize from the canvas dimensions; I want to resize from the render's original dimensions.
Ap_Compsci_student_13 wrote:
OK, sorry, but doesn't SSCCE say to post the whole code?
The first S is Short, and yes, it needs to be all the code needed to illustrate the problem and let us reproduce it. If you were developing a new Web framework, your example might be short, but for a student game it's very long. -
I am using Flash Pro CS5. The documentation for Flash Player 10 at the link below
http://helpx.adobe.com/flash-player/kb/size-limits-swf-bitmap-files.html
says that the max is 4050 by 4050.
We are trying to run a swf in a monitor array of 3840 by 4320 but in the editing of the document properties the limit is only 2880 by 2880.
Has this been fixed in Flash CS6?
Does anyone have any advice?
We are stumped.
Originally we were using FP 11.2, then tried FP 11.1, and went back to FP 11.2, but couldn't load the swf at the full stage size with either. We can load a 1920 by 2160 stage, but it doesn't look to be the right size - we were, however, able to load the swf with the smaller stage size. (That is a separate issue, though - the non-exactness of the stage dimensions.)
We are attempting to output to a two-by-four monitor array, aiming for 3840 (1920 x 2) by 4320 (1080 x 4). The large stage size should be possible according to the specs, but no go. -
Problems printing to large paper size
Hi,
Using Acrobat 9
In my work, I use different programs for floorplans and renderings. I send my customers the documents as PDF files, and usually the scaled floorplans are in a large paper size (22x34 or up).
With one of the CAD programs all works fine with paper sizes up to size C, but if I choose, for example, ANSI D, I see a "Creating Adobe PDF" screen but nothing happens; the progress bar doesn't move and the file is not printed / saved.
I'd appreciate any idea to solve this issue
TIA
There are a couple of things you can try: 1.) see if you have enough system memory to process the page (it could be running out of system memory); 2.) set an invisible holding line to the page / document and lock it (no stroke, no fill @ 13 x 19); 3.) use the Page tool (if you haven't already) and set it to the document, edge-to-edge. Also, check your ink supplies; there may be an empty cartridge preventing you from printing the entire file. Hope this helps.
-
Software distribution problem with RME 4.2 regarding flash size
Hello, I'm updating the software on about 200 3750 stacks. During the verification process I usually get a warning with messages SWIM1132, SWIM1201 and SWIM1106, but the jobs then run successfully.
Sometimes, however, the verification step fails with the SWIM1200 error. That is a little strange, because all the switches have the same flash size of 15998976 bytes.
The error messages are in the attachment.
I would be very happy if somebody could give me some advice.
Markus Schwendimann
Basel, Switzerland
This is CSCsu49349. The bug is fixed in RME 4.3. The workaround is to go to RME > Software Mgmt > Software Repository, then click on the name of the image. Reduce its required flash size down one setting (i.e. make the requirement 8 MB). This will allow the image to deploy properly.
Please support CSC Helps Haiti
https://supportforums.cisco.com/docs/DOC-8895
https://supportforums.cisco.com -
Leopard to Snow Leopard: can I use a large flash drive to back up first?
I've been using Leopard for 4 years with no problems.
Since Google decided to stop support for Leopard, and I use Chrome a lot, I decided to upgrade. I won't be upgrading to Mountain Lion right away. I ordered my Snow Leopard discs from Apple.
I understand that I need to backup. I want to backup with Time Machine before I upgrade the OS, but I don't have $75+ right now to purchase an external hard drive for this purpose only.
Can I use a large Flash Drive to run Time Machine? Like a 32 GB? I know that flash drives have a limited number of writes on them and can go bad, so I am thinking I could just use it this one time to make sure I have a clean backup. If this is possible, then can I set up Time Machine again later when I have an external drive?
Also, I have 2 cores on my Mac, but I have no idea how to use the 2nd core.
I'm also not sure if I need to purchase a specific external drive because I have a mac.
Thanks for the help.
If you only need to make a full backup prior to upgrading, I would suggest using SuperDuper, which is free for creating a full, bootable clone. (It is not free when you start needing to make incremental backups, but you won't be doing that, at least not right now.) For a bootable clone, you only need a flash drive (or external drive) as big as the space currently used on your hard drive, not as big as the entire drive.
In the past, I would have recommended Carbon Copy Cloner for this, but it's no longer free or donation-ware and you don't have the budget for both that and some kind of external or Flash drive.
http://www.shirt-pocket.com/SuperDuper/SuperDuperDescription.html
You might also want to look around newegg.com for inexpensive USB external drives, or even large enough Flash drives. You'll probably find a much less expensive external hard drive than a Flash drive for this purpose.
You do not need to use Time Machine, but even if you did you could just make one full backup to a Flash or external drive and then unmount and disconnect it. An external for TM only needs to be so large because it's constantly taking snapshots of your drive and those add up. But you won't be using it that way.
Anyway, I think you're much better off with a bootable clone, should you need to restore that.
Don't worry about having 2 cores; that's nothing to do with backing up. -
I am trying (and failing) to utilize large page sizes on a Solaris 9 machine.
# uname -a
SunOS machinename.lucent.com 5.9 Generic_112233-11 sun4u sparc SUNW,Sun-Blade-1000
I am using as my reference "Supporting Multiple Page Sizes in the Solaris Operating System" (http://www.sun.com/blueprints/0304/817-6242.pdf)
and
"Taming Your Emu to Improve Application Performance (February 2004)"
http://www.sun.com/blueprints/0204/817-5489.pdf
The machine claims it supports 4M page sizes:
# pagesize -a
8192
65536
524288
4194304
I've written a very simple program:
#include <stdlib.h>

int main(void)
{
    int sz = 10*1024*1024;
    int *x = (int *)malloc(sz);
    print_info((void**)&x, 1);  /* poster's helper; prints the backing page size */
    while (1) {
        int i = 0;
        while (i < (int)(sz / sizeof(int))) {
            x[i++]++;
        }
    }
}
I run it specifying a 4M page size for the heap:
# ppgsz -o heap=4M ./malloc_and_sleep
address 0x21260 is backed by physical page 0x300f5260 of size 8192
pmap also shows it has an 8K page:
pmap -sx `pgrep malloc` | more
10394: ./malloc_and_sleep
Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
00010000 8 8 - - 8K r-x-- malloc_and_sleep
00020000 8 8 8 - 8K rwx-- malloc_and_sleep
00022000 3960 3960 3960 - 8K rwx-- [ heap ]
00400000 6288 6288 6288 - 8K rwx-- [ heap ]
(The last 2 lines above show about 10M of heap, with a pgsz of 8K.)
I'm running this as root.
In addition to the ppgsz approach, I have also tried using memcntl and mmap'ing ANON memory (and others). Memcntl gives an error for 2MB page sizes, but reports success with a 4MB page size - but still, pmap reports the memcntl'd memory as using an 8K page size.
Here's the output from sysinfo:
General Information
Host Name is machinename.lucent.com
Host Aliases is loghost
Host Address(es) is xxxxxxxx
Host ID is xxxxxxxxx
/opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
Manufacturer is Sun (Sun Microsystems)
System Model is Blade 1000
ROM Version is OBP 4.10.11 2003/09/25 11:53
Number of CPUs is 2
CPU Type is sparc
App Architecture is sparc
Kernel Architecture is sun4u
OS Name is SunOS
OS Version is 5.9
Kernel Version is SunOS Release 5.9 Version Generic_112233-11 [UNIX(R) System V Release 4.0]
Kernel Information
SysConf Information
Max combined size of argv[] and envp[] is 1048320
Max processes allowed to any UID is 29995
Clock ticks per second is 100
Max simultaneous groups per user is 16
Max open files per process is 256
System memory page size is 8192
Job control supported is TRUE
Saved ids (seteuid()) supported is TRUE
Version of POSIX.1 standard supported is 199506
Version of the X/Open standard supported is 3
Max log name is 8
Max password length is 8
Number of processors (CPUs) configured is 2
Number of processors (CPUs) online is 2
Total number of pages of physical memory is 262144
Number of pages of physical memory not currently in use is 4368
Max number of I/O operations in single list I/O call is 4096
Max amount a process can decrease its async I/O priority level is 0
Max number of timer expiration overruns is 2147483647
Max number of open message queue descriptors per process is 32
Max number of message priorities supported is 32
Max number of realtime signals is 8
Max number of semaphores per process is 2147483647
Max value a semaphore may have is 2147483647
Max number of queued signals per process is 32
Max number of timers per process is 32
Supports asynchronous I/O is TRUE
Supports File Synchronization is TRUE
Supports memory mapped files is TRUE
Supports process memory locking is TRUE
Supports range memory locking is TRUE
Supports memory protection is TRUE
Supports message passing is TRUE
Supports process scheduling is TRUE
Supports realtime signals is TRUE
Supports semaphores is TRUE
Supports shared memory objects is TRUE
Supports synchronized I/O is TRUE
Supports timers is TRUE
Device Information
SUNW,Sun-Blade-1000
cpu0 is a "900 MHz SUNW,UltraSPARC-III+" CPU
cpu1 is a "900 MHz SUNW,UltraSPARC-III+" CPU
Does anyone have any idea as to what the problem might be?
Thanks in advance.
Mike
I ran your program on Solaris 10 (yet to be released) and it works.
Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
00010000 8 8 - - 8K r-x-- mm
00020000 8 8 8 - 8K rwx-- mm
00022000 3960 3960 3960 - 8K rwx-- [ heap ]
00400000 8192 8192 8192 - 4M rwx-- [ heap ]
I think you are missing this patch for Solaris 9:
i386: 114433-03
sparc: 113471-04
Let me know if you encounter problems even after installing this patch.
Saurabh Mishra -
I've got an HP 8750 printer running with a Windows 7 Ultimate machine, and I've got a paper-size problem where it keeps telling me the paper is too big. It's set up for 11 x 17.
The actual wording on the printer is:
Paper installed is larger than needed. Press cancel to replace with correct size, if you wish to save paper, or press check mark to continue.
When you press the check mark it feeds the paper, but for a multi-page document I have to press the check mark for each page to feed.
How do I get it to print 11 x 17 paper without having to press the check mark for each page?
''guigs2 [[#answer-672422|said]]''
<blockquote>
NoScript stops cookies, please disable this addon/extension as well as make sure that the language en-us is installed.
# Open up the Firefox Preferences tab. You can do this by typing about:preferences in the URL bar.
# Click "Content".
# Next to "Languages", click "Choose".
# Select "English/United States [en-us]", click "Add".
# Re-open "about:accounts".
# Click "Get Started".
</blockquote>
Thank you for replying. Unfortunately, I already did all of these things. As you can see from the below screenshot, the language is already set. Also, this screenshot was taken in Safe Mode, so NoScript is not enabled. About:accounts still says I need to enable cookies for some reason. So, this solution didn't work.... -
New FAQ Entry on JVM Parameters for Large Cache Sizes
I've posted a new [FAQ entry|http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#60] on JVM parameters for large cache sizes. The text of it is as follows:
What JVM parameters should I consider when tuning an application with a large cache size?
If your application has a large cache size, tuning the Java GC may be necessary. You will almost certainly be using a 64-bit JVM (i.e. -d64) and the -server option, and setting your heap and stack sizes with -Xmx and -Xms. Be sure that you don't set the cache size too close to the heap size, so that your application has plenty of room for its data and avoids excessive full GCs. We have found that the Concurrent Mark Sweep GC is generally the best in this environment since it yields more predictable GC results. It can be enabled with -XX:+UseConcMarkSweepGC.
Best practice dictates that you disable System.gc() calls with -XX:+DisableExplicitGC (note the +, which turns the DisableExplicitGC option on).
Other JVM options which may prove useful are -XX:NewSize (start with 512m or 1024m as a value), -XX:MaxNewSize (try 1024m as a value), and -XX:CMSInitiatingOccupancyFraction=55. NewSize is typically tuned in relationship to the overall heap size so if you specify this parameter you will also need to provide a -Xmx value. A convenient way of specifying this in relative terms is to use -XX:NewRatio. The values we've suggested are only starting points. The actual values will vary depending on the runtime characteristics of the application.
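Taken together, a starting-point invocation might look like the sketch below. The heap and generation sizes are purely illustrative placeholders that must be tuned per application, and "MyApp" is a hypothetical class name (note the + in -XX:+DisableExplicitGC, which turns that option on):

```shell
java -d64 -server \
     -Xms8g -Xmx8g \
     -XX:+UseConcMarkSweepGC \
     -XX:+DisableExplicitGC \
     -XX:NewSize=1024m -XX:MaxNewSize=1024m \
     -XX:CMSInitiatingOccupancyFraction=55 \
     MyApp
```

With flags like these, the cache size would then be set comfortably below the 8 GB heap, per the advice above.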
You may also want to refer to the following articles:
* Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning
* The most complete list of -XX options for Java 6 JVM
* My Favorite Hotspot JVM Flags
Edited by: Charles Lamb on Oct 22, 2009 9:13 AM
First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
The root cause of the problem you describe could be related to a timeout in the ODBC driver (especially in light of your comment that it happens only for larger tables):
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
indicates that the driver or the DB abends the connection due to a timeout.
Check out the wait_timeout MySQL variable on the server and increase it. -
There is a large flashing advertisement at the top of the page. Problem is, this is appearing on pages that shouldn't have any advertising.
== This happened ==
Not sure how often
Hello Jim.
You may be having a problem with some Firefox add-on that is hindering your Firefox's normal behavior. Have you tried disabling all add-ons (just to check), to see if Firefox goes back to normal?
Whenever you have a problem with Firefox, whatever it is, you should make sure it's not coming from one of your installed add-ons, be it an extension, a theme or a plugin. To do that easily and cleanly, run Firefox in [http://support.mozilla.com/en-US/kb/Safe+Mode safe mode] and select ''Disable all add-ons''. If the problem disappears, you know it's from an add-on. Disable them all in normal mode, and enable them one at a time until you find the source of the problem. See [http://support.mozilla.com/en-US/kb/Troubleshooting+extensions+and+themes this article] for information about troubleshooting extensions and themes. You can troubleshoot plugins the same way.
If you want support for one of your add-ons, you'll need to contact its author. -
Video Stalls - Larger File Size
Just downloaded Battlestar Galactica season 2, and the video stalls at full screen. I do not have this problem with previous Battlestar Galactica shows and movies that I have downloaded in the past.
I notice the file size and total bit rate of the new shows are three times greater than those of the shows I've downloaded in the past.
Using the Activity Monitor utility, I notice I'm maxing out my CPU when I go to full screen, which I think is causing the video to stall.
I also notice that my previous TV show downloads now don't fully fill my screen like they did before the new downloads.
I hope someone might know how I can get my TV shows back to working normally at full screen, and why the new TV shows' file sizes are larger than before.
Thanks for Your Help
Ricky
PowerBook G4 15" LCD 1.67GHz Mac OS X (10.3.9)
Thanks for the reply. Shortly after I posted, I realized that Apple was offering better TV show quality, which results in larger file sizes. I also went to the Energy Saver section in System Preferences and found that Optimize Energy Settings was set to Presentations. I changed it to Highest Performance, which resolved the issue.
iTunes 7.0.2
Quicktime 7.1.3
RAM 512MB
Thanks for your help. -
BIP bursting large document size files.
We have BIP, and we are running FSGs and bursting them to the BIP standalone product. Our files are large, and when we process a 500 MB file, the tag on the XML file gets removed. Is there a way to burst large reports without this happening? Or what is the size limit? We are currently creating 10 reports of different sizes to get the information from EBS to BIP.
Any help with will be appreciated.
Thanks
Thanks for your response.
Apologies, I've not explained myself well here. The majority of the information within the large PDF has come from Word files, although not all of it. There are also Visio drawings and AutoCAD drawings. Generally with the AutoCAD drawings, I receive them already PDF'd, and it is these, I suspect, which cause the large file size (they often contain OS mapping data or ECW files). I guess this explains the main reason I used PDF in the first place: it's software that can convert many different file types into the same format, as in this instance. Unfortunately I am unable to post an example, as the information relates to a government agency and the content may be deemed sensitive.
The large file is essentially an electronic version of a suite of 6 hardcopy ringbinders. The obvious answer is to break down the document into smaller files (i.e. one for each volume of the suite). The problem here is that there are links within the file whereby references to other volumes are linked, taking the user straight to the section referred to. If I break the large file down I will lose this functionality, unless there is a way to link to a specific section of another document? If so, please enlighten me how this works.
Hopefully I've explained things a little better now but whether my explanation helps find a solution I don't know. -
Is there a solution to Rapidweaver Stacks Cache size problems?
Is there a solution to Rapidweaver Stacks Cache size problems?
My website, http://www.optiekvanderlinden.be , has about 100 pages and is about 250 MB in size. Every time I open RapidWeaver with the Stacks plugin, the user/library/cache/com.yourhead.YHStacksKit folder grows to several gigabytes. If I don't manually empty the com.yourhead.YHStacksKit cache daily AND empty the Trash before I turn off my Mac, my Mac stalls at startup the next day - sometimes 5 minutes or more before it really boots. If I have been working on such a large site for a few weeks without deleting that cache, my iMac doesn't start up anymore, and the only thing left to do is completely reinstall the OS X operating system. This problem doesn't happen on days when I use all my other programs but don't use RapidWeaver.
Does anyone know a solution and / or what the cause of this is?
Once more: I just launched my project in RapidWeaver, and WITHOUT making any changes to my pages, the cache/com.yourhead.YHStacksKit folder filled itself with 132 folders (84 MB in total), most of them containing images of specific pages, etc. The largest folder in the cache is the one with the images of a page containing a catalogue of my company; this folder is 7 MB. It seems that just by opening a project, different folders are immediately created containing most of the images in the project, along with some folders containing a log.txt file. Still, I don't see why it fills up a huge cache even without changes to the project. I have 2 different sites, so 2 different projects, and I get this with both of them.