Compositing Challenge
Hello. My son and I are going to shoot a promo for his website. He's a 9th grader, and I'm a single dad, so we're trying to do this on the cheap. I'm wondering if folks out there can give advice about how we might be able to use After Effects to solve a problem.
In this video, we will be crushing an iPad that is playing an animation. Well, we won't actually be crushing a working iPad. I have the glass front and the aluminum back panel. No board. No LCD panel. So the glass is clear, which is perfect. To make this work, I'm going to put a green screen inside the glass of the partial iPad and, in After Effects, replace the green screen with the animation. This way, when we crush the iPad, it will look like we're crushing a working iPad with the animation running on it. The crush will be so quick that we can, in post, remove the green-screened image. If anything, it will look like the screen went blank as the foot hit it. That's fine.
The problem I'm having, however, is that when crushing underfoot an iPad with a green screen inside of it, the green fabric (or paper) will be seen clearly in the pile of broken iPad scraps left on the ground. Do you follow so far? Do you understand the problem?
I’m writing to you for your advice on how to solve this.
Options I’ve come up with so far:
Put a green plastic film inside the clear screen. When the foot touches the top of the iPad (it will be in a stand so it is vertical), the image of the animation is turned off. Problem with this approach: when the animation is removed from the green screen, we'll be left with a green screen as the foot comes down. We'll also see the green plastic on the ground in the pile of mess that was the former iPad. Question: if we use a green film, can we replace the green with black in post-production, so that the screen looks black when the animation is removed and so that the sheet, when lying on the ground, looks black, which the viewer would interpret as merely the black insides of an iPad (the LCD film, perhaps)?
In After Effects, add a color layer and shape it to fit over the clear glass screen. The color would be green, and that layer would then serve as the green screen we'd composite out and replace with the animation. When the foot touches the top of the iPad, we could turn off the compositing (which would remove the animation) and replace the green with black. We could then artificially shatter that layer as the foot progresses through the stomp. Problem with this approach: I don't know if you can create an artificial green screen in After Effects using layers. Do you think this could be done?
Find a way to affix green powder or paint to the inside of the clear glass iPad screen. Then turn off the composited animation as the foot stomps and replace it with black as the glass shatters. Problems with this approach: I think we'd wind up with either green shards (if we can't carry the compositing through after the glass has shattered) or black glass shards if we can carry it through. Black would be acceptable; green wouldn't. I wonder if we could replace green with "clear"?
Any advice you could offer would be really appreciated by a kid trying to make a hit. :-)
Thanks.
Glenn
theglennotf wrote:
No need for compositing. Am I on the right track here?
Sorry, no. You face a good deal of work. Compositing means much more than simply pulling a chroma key.
You'll have to be careful when you shoot to get a "clean plate" of the unaffected prop. Just to have it. You may not even use it, but it's there in a pinch. You'll end up using the same shot in many different ways, and on many different layers in AE.
You'll create an animated mask around the foot: the rotoscoping part.
You'll use Mocha to cut the hole for the screen, behind which you'll put the new display.
You'll use the Mocha Layer again to display the screen only, so that you can use its reflections and its general look to make the new display look more realistic.
Since this device will doubtlessly move as the foot steps on it, you'll have to move the display layer accordingly, either via motion tracking supplied by Mocha, using AE's built-in Motion tracking, or animating the display's position by hand.
So while there's no green-screening, you can see it takes a lot to pull off a realistic-looking shot.
I should also mention that if you're an AE novice, this isn't the type of work novices are equipped to undertake. AE really does require a firm grounding in the basics before moving to the fun stuff. Sidestepping the basics almost always results in wasted time, wasted effort and frustration.
Here's a good place to begin learning the basics, and it's all free:
http://blogs.adobe.com/toddkopriva/2010/01/getting-started-with-after-eff.html
Similar Messages
-
I have a basic reference element loaded as a plugin, which is able to retrieve information about, pause and play video elements while displaying its own overlay content as well. What it's not currently doing, is automatically positioning itself to the same location and dimensions of the video element that it references. For some reason, when I try using the layout API to set position and size in the video element metadata, this is not retrieved by the reference element (it returns a null value for the target video element's metadata).
I wanted to try a different approach, specifically, creating a parallel composition that works as follows:
1) When a video element is created in the factory, it is automatically wrapped in a parallel composition element along with a reference element, which is passed the video element as a target.
2) The reference element sets its width and height with the width and height properties of the video element spatial trait and will listen for any changes to that width and height to adjust accordingly.
3) Now that I think about it, the parallel composition should itself be a proxy for the video element, so other code in the player that moves, resizes, or otherwise alters the display of the video element will in fact be adjusting the whole video element + reference element parallel composition.
In other words, I want the reference element to be an overlay that is "locked" to the surface of any video element and follows it in size, position, display and even audio traits.
Suggestions for the best way to approach this within the framework?
Thanks Wei,
With some more tweaking, I am able to get and use the layout metadata as you said! Here is where my issue stands now:
* My IMediaReferrer element can now look at the layout metadata of the target media element and copy those values, so it has the same width, height, x, and y properties. This is great!
* However, particularly for RelativeLayoutFacet metadata, this is only fully useful if both the target media element and my IMediaReferrer element are in the same composition. If they are in different compositions which are themselves placed differently, then even identical x and y values don't add up to the same position on the screen.
So, my challenge is to figure out how to ensure that my IMediaReferrer element is placed in the same composition as the target media element.
Again, the goal is to write a plugin that will have a reference to an underlying video, and will always have the same width, height, x, and y of the video it is overlaying. This plugin should not require any additional coding in the player, but should take care of setting itself up as above automatically when loaded.
There isn't any property on a media element which exposes the "parent" composition element that it is a part of, so I don't know how to get my IMediaReferrer to add itself to the same composition as the reference target automatically. I'm not sure if it's possible to make my IMediaReferrer element extend ParallelElement and still load in a SWF Element as an overlay, and add that SWF Element and the target Media Element as children with identical layout metadata.
Do you have any suggestions on how I should proceed?
Thanks again! -
Don't want new TIFF files saved in my catalogue if I'm only using the image for a composite.
When editing in Photoshop CS4 from LR3, the new TIFF would not appear in the catalogue unless I saved it in PS. In LR4 a new TIFF is saved in my catalogue regardless of whether I save the edited image in PS or not. I have a cloud library I use often and do not want new files saved in my LR catalogue if I'm only using the opened image for compositing. Can anyone help me figure out how to change the workflow to be like LR3?
-Agfaclack- wrote:
i think adobe should support new cameras for older CS versions too.
in the end PS is expensive enough to justify better support.
How far back would you go? One version? Three? Where should the line be drawn? Adobe draws the line at the currently shipping version...it's now CS5 and shortly it will be CS6.
Even if Adobe were to do this, you do realize that trying to put new plug-ins into older software is really tough. For example on the Mac, Photoshop CS5 requires that the plug-in be written in Apple's Cocoa API. Camera Raw 5 required a major update to be able to run in Cocoa...retrofitting Camera Raw 5 to run in a PPC platform (supported by CS3/CS4 but not supported by CS5) would be difficult–a lot of work for zero return. As a result of the engineering challenges of backwards compatibility Adobe has the policy of only updating currently shipping software to current customers. Truth be told, if you don't have the most recent version, you are not a current customer...you are a former customer. Having Adobe spend R&D engineering backwards compatibility to support former customers would take resources away from supporting current customers. As a current customer, I wouldn't like that.
But it's all a moot point, because I don't see any evidence that Adobe is going to change this policy on backwards compatibility. In fact, going forward, Adobe has already announced a change to the Photoshop upgrade rules allowing only the most recent version to be upgraded to the new version. Because of the reaction that only CS5/5.5 could upgrade to CS6, Adobe blinked and has now given a grace period until the end of 2012 for CS3 & CS4 users to upgrade to CS6.
Again many people don't like this, but as a pro user of Photoshop, I always upgrade anyway so it has no impact on me. Yes, it will bite some people and I can have sympathy, but again, you always have the free DNG Converter to use in the case you need new camera support. And hey, the new cameras come with software to process their raw files, right? -
Multi-camera IMAQdx systems: shortcuts for stitched composite image
Imagine a system using, for example, multiple GigE cameras through the IMAQdx interface, where we wish to form a composite stitched image from the multiple camera views. The stitching principle is naive, straightforward concatenation, one view next to another.
The problem is that while it is trivial to build such a composite image, it's difficult to do it very efficiently. The image sizes are large, tens of megapixels, so every copy matters. Alternative hardware configurations would open up a lot of options, but say we're stuck using GigE cameras and (at least initially) the IMAQdx interface. What tricks, or even hacks, can you guys imagine for this challenge?
I've seen some talk about the IMAQdx grab buffers and it appears to me that one cannot manually assign those buffers or access them directly. The absolute optimal scenario would of course be to hack your way around to stream the image data directly next to each other in the memory, sort of as shown below in scenario1.png:
The above, however, doesn't seem to be too easily achieved. Second scenario then would be to acquire into individual buffers and perform one copy into the composite image. See illustration below:
Interfaces usually allow this with relative ease. I haven't tested it yet but based on the documentation using ring buffer acquisition and "IMAQdx Extract Image.vi" this should be possible. Can anyone confirm this? The copying could be performed by external code as well. The last scenario, without ring buffer, using "IMAQdx Get Image2.vi" might look like this:
The second copy is a waste so this scenario should be out of the question.
I hope this made some sense. What do you wizards say about it?
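For what it's worth, the second scenario (acquire into individual buffers, then perform exactly one copy into the composite) can be sketched outside LabVIEW. Here is a minimal Python/NumPy illustration; the frame sizes, camera count, and the place_frame helper are made-up for the example and have nothing to do with the actual IMAQdx API:

```python
import numpy as np

# Suppose three cameras each deliver 2048x2448 8-bit frames (sizes are
# invented for illustration). Preallocate the composite once, then each
# grab performs exactly one copy into its slice of the composite.
H, W, N = 2048, 2448, 3
composite = np.empty((H, W * N), dtype=np.uint8)

def place_frame(frame, cam_index):
    """Copy one camera's frame into its slot of the composite (one copy,
    no intermediate buffer)."""
    composite[:, cam_index * W:(cam_index + 1) * W] = frame

# Stand-in for per-camera acquisition: each "camera" produces a flat frame.
for i in range(N):
    place_frame(np.full((H, W), i, dtype=np.uint8), i)
```

The point of the sketch is only the memory layout: because the composite is preallocated and each camera writes a disjoint slice, the per-frame cost is a single copy, which matches scenario 2 above.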
Solved!
Go to Solution.
Hi,
Sorry, the constraints are not really well documented, as they are dependent on platform, camera type, camera capabilities, and how the driver handles things. All of these are subject to change, so we decided instead to try to make the errors very self-descriptive to explain how to meet the requirements.
You are correct that these fundamentally come down to making sure that the image buffer specified is able to be directly transferred to by the driver. The largest requirement is that the image data type is the same and doesn't need any decoding/conversion step. The other requirements are more flexible and change depending on many factors:
- No borders, since this adds a discontinuity between each line. This error doesn't apply to GigE Vision (since the CPU moves the data into the buffer) or to USB3 Vision cameras that have a special "LinePitch" feature that can allow them to pad the image lines. The USB drivers of more modern OSes (like Win8+) have more advanced DMA capabilities so it is possible/likely that this also can be ignored in the future.
- Line width must be a multiple of 64 bytes (the native image line alignment on Windows) - same as the border requirement
So, if you end up using GigE Vision cameras, this should just work. If you want to use USB3 Vision you have a few more constraints to work with.
Eric -
Sync and fire-forget process in a single composite
Hi,
I have two composite process, which does the same particular task but in a different way.
1. Sync BPEL process.
2. DB Polling a table, which is a fire and forget service
Can I have both cases in one single composite? Are there any challenges I would face? Please suggest.
TIA
Use 1 composite and 2 BPEL processes....
Arik -
Composite patterns in bash for copying
I've spent several hours today trying to make a shell (bash) script to back up some (quite a few) directories. The problem (challenge) is that I want to leave some files (the big result files) out.
My directory structure:
Dir/
SubDir1/
smallfile1
smallfile2
bigfile
SubDir2
smallfile1
smallfile2
bigfile
By switching on extglob in bash (shopt -s extglob) it is possible to use so-called composite patterns for wildcards (ref. http://www.linux-mag.com/content/view/1528/43/), and I thought this would help.
Problem 1: I try typing the following command from the directory above Dir/: cp -R Dir/*/!(bigfile) new_location. This does not reproduce the directory structure (which I need), and files from SubDir2 overwrite files from SubDir1.
Problem 2: Since I didn't know how to solve problem 1, I tried to make a small bash script so that I can be in Dir/ and just write 'mycopy 1', so that 'cp SubDir1/!(bigfile) new_location/SubDir1' is run. Now I get an error message saying that "(" is unexpected (extglob is a parse-time option, so in a script it has to be enabled inside the script itself, before the line that uses the pattern).
Now I'm 100% stuck and I really don't want to do it manually...
You need to Control click (or right click) in the selected area below so that you will be able to see the Copy/paste menu:
!http://i37.tinypic.com/ehixl.png!
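Back to the actual copying problem: one way to sketch the exclusion copy while preserving the directory layout is find plus GNU cp's --parents flag (rsync --exclude='bigfile' would be another route). The tree below just recreates the example from the question:

```shell
# Recreate the example tree from the question
mkdir -p Dir/SubDir1 Dir/SubDir2
touch Dir/SubDir1/smallfile1 Dir/SubDir1/smallfile2 Dir/SubDir1/bigfile
touch Dir/SubDir2/smallfile1 Dir/SubDir2/smallfile2 Dir/SubDir2/bigfile
mkdir -p new_location

# Copy every file except those named "bigfile", keeping the
# SubDir1/SubDir2 layout intact (--parents recreates the path).
(cd Dir && find . -type f ! -name 'bigfile' \
    -exec cp --parents {} ../new_location/ \;)
```

This sidesteps extglob entirely, so it also works in scripts without shopt; note that --parents is a GNU coreutils option and may not exist on BSD/macOS cp.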
A -
Duplicate a composition layer in the timeline, at same time make new comp
Hi,
I know that in the project panel I can do Control D and make a new comp. I know that in the layer panel I can do control D and make a new layer for a different comp, which keeps all the animations I have applied to that comp.
I also know that after I've duplicated that comp in the Project Panel I can Alt drag it to the other comp in the timeline and replace, or swap the comps.
What I want is to remove a step.
I have Comp A
In the timeline I want to duplicate this comp and at the same time create a new Comp B.
What I want is to be able to duplicate the layer in the timeline, and while doing so also create a new composition which would show up in my project panel. Basically I am trying to skip the extra steps of duplicating layer, going to project panel, duplicating comp, then Alt dragging to swap them.
Is there a shortcut for this?
Mylenium, why would you think there should be no shortcut for this?
You are forgetting that pre-comps can themselves contain any number of pre-comps inside pre-comps inside pre-comps.... You would have to find a way of taking care of those scenarios, which is a major logic challenge and effort. That being so, and to avoid potentially risky user operations that might ruin entire projects, it is much safer not to allow this. I would agree with Rick that a simple script that mimics the manual steps, tied to a function key, is probably as good as it gets, but even there the underlying deeper problem still exists, so you have to work very carefully.
Mylenium -
Composites in ABAP??
Hi,
I would like to know: what are COMPOSITES in ABAP?
If anybody knows about this concept, please clarify it for me...
Thanks in advance.
Please, anybody, help me.
Raja.
Message was edited by: raja gurralahi..
<b>* Composite Application</b>
Application that integrates various existing applications.
Composite applications represent a new breed of applications that try to meet the challenges as:
*Serve business processes that cross multiple functions
*Target multiple users even across inter-enterprise boundaries
*Integrate functions that were previously supported by independent generic applications
*They are built on top of the company's heterogeneous technology landscape, thus enabling cross-functional business processes and securing existing software investments.
<b>* Packaged Composite Applications (PCAs)</b>
*PCAs are a new paradigm for developing applications. Instead of starting from scratch, PCAs start with existing data and functionality and then coordinate that functionality in different ways to solve new problems.
*For example, developers can build applications by grabbing the customer objects and related functionality from the Customer Relationship Management (CRM) system, the financial information and calculations from the Enterprise Resources Planning (ERP) system, product related information from the Product Management system, and then add whatever new functionality that might be needed from systems like Content Management or Portal or Business Warehouse to get the job done.
*PCAs also add new functions for specialized purposes that sit on top of existing platform.
*The "packaged" part simply means that these applications are products supported the exact same way that the enterprise applications like CRM are supported. Packaging is more significant for customers than UI designers.
<b>*SAP's version of PCAs are called xApps.</b> -
Suppose you are shopping for a new car, and are specifically looking for a big car with decent gas mileage. Unfortunately, these are two conflicting goals. If we are querying the Cars relation in the database, then we can certainly ignore all models that are worse than others by both criteria. The remaining set of cars is called the Skyline.
More formally, the Skyline is defined as those points which are not dominated by any other point. A point dominates another point if it is as good or better in all dimensions. For example, a Roadster with mileage=20 and seating=2 dominates a Ferrari F1 with mileage=10 and seating=1. This condition can certainly be expressed in SQL. In our example, the Skyline query is
select * from Cars c
where not exists (
  select * from Cars cc
  where cc.seats >= c.seats and cc.mileage > c.mileage
     or cc.seats > c.seats and cc.mileage >= c.mileage
);
Despite the apparent simplicity of this query, I'm not pleased with its performance. Even if we index the seats and mileage columns, this wouldn't help much, as only half of the records on average meet each individual inequality condition. Certainly, there aren't many records that satisfy the combined predicate, but we can't leverage an index to match it. Bitmapped indexes, which excel with Boolean expressions similar to what we have, demand a constant on one side of the inequality predicate.
There is an efficient way to answer the Skyline query. Order all the data by either (seats, mileage) or (mileage, seats); here a composite index might be handy. Compare each record with its predecessor, and discard the one that is dominated by the other. Let's assume that our example has been extended to 4 records:
MANUFACTURER SEATS MILEAGE
Hummer 4 5
Ferrari 1 10
BMW 2 15
Roadster 2 20
Iterating record by record down, Ferrari would be excluded first, because it's dominated by its neighbor BMW. Then BMW itself would be excluded, because it's dominated by Roadster.
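For comparison, the sort-and-scan idea described above can be sketched outside SQL. A minimal Python version follows; the (manufacturer, seats, mileage) tuple layout and the function name are just illustration, and it assumes distinct points (exact duplicates would be collapsed):

```python
def skyline(cars):
    """Return the cars not dominated by any other car.

    A car dominates another if it is as good or better in both
    seats and mileage (and strictly better in at least one).
    Each tuple is (manufacturer, seats, mileage).
    """
    # Sort by mileage descending (seats descending breaks ties), so a
    # single pass with a running seat maximum finds the Skyline: a car
    # survives only if it has strictly more seats than every car with
    # better-or-equal mileage seen so far.
    ordered = sorted(cars, key=lambda c: (-c[2], -c[1]))
    result, best_seats = [], -1
    for car in ordered:
        if car[1] > best_seats:
            result.append(car)
            best_seats = car[1]
    return result

# The 4-record example from the thread:
cars = [("Hummer", 4, 5), ("Ferrari", 1, 10),
        ("BMW", 2, 15), ("Roadster", 2, 20)]
```

On this data the scan keeps Roadster (best mileage) and Hummer (most seats), and discards Ferrari and BMW, matching the walkthrough above.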
The challenge is to express this algorithm (efficiently) in SQL, with or without analytics.
Mike:
Performance aside, see if you can reproduce:
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
flip@FLOP> CREATE TABLE CARS
2 (
3 MANUFACTURER VARCHAR2(10 BYTE),
4 SEATS NUMBER(2),
5 MILEAGE NUMBER
6 );
Table created.
Elapsed: 00:00:00.00
flip@FLOP>
flip@FLOP> insert into cars
2 select to_char(object_id) , mod(object_id,7)+1, mod(object_id,25)+25
3 from all_objects where rownum < 101;
100 rows created.
Elapsed: 00:00:00.00
flip@FLOP>
flip@FLOP> insert into cars
2 select 'x'||manufacturer, seats, mileage
3 from cars;
100 rows created.
Elapsed: 00:00:00.00
flip@FLOP>
flip@FLOP> commit;
Commit complete.
Elapsed: 00:00:00.00
flip@FLOP>
flip@FLOP> CREATE OR REPLACE FORCE VIEW CARS_VIEW
2 (MANUFACTURER,SEATS, MILEAGE, LVL, RNM, SKLN_FLG)
3 AS
4 select
5 MANUFACTURER,
6 seats,
7 mileage,
8 level lvl,
9 rownum rnm,
10 (
11 case when level=rownum then 1
12 else 0
13 end
14 ) skln_flg
15 from
16 (
17 select
18 MANUFACTURER,
19 seats,
20 mileage
21 from cars
22 order by mileage desc
23 )
24 connect by prior seats < seats ;
View created.
Elapsed: 00:00:00.00
flip@FLOP>
flip@FLOP> select
2 c.* from cars c,
3 (
4 select mileage,seats from cars_view where rownum <
5 (
6 select rnm from cars_view
7 where skln_flg=0
8 and rownum < 2
9 )
10 )sbq
11 where c.mileage = sbq.mileage
12 and c.seats = sbq.seats
13 ;
MANUFACTUR SEATS MILEAGE
9849 1 49
x9849 1 49
9249 3 49
x9249 3 49
x22724 3 49
22724 3 49
14424 5 49
x14424 5 49
15174 6 49
x15174 6 49
7349 7 49
x7349 7 49
12 rows selected.
Elapsed: 00:00:00.00
flip@FLOP>
flip@FLOP> select * from Cars c
2 where not exists (
3 select * from Cars cc
4 where cc.seats >= c.seats and cc.mileage > c.mileage
5 or cc.seats > c.seats and cc.mileage >= c.mileage
6 );
MANUFACTUR SEATS MILEAGE
7349 7 49
x7349 7 49
Elapsed: 00:00:00.00
flip@FLOP> -
MAX(SummaryNum) +1 bad idea, but how to use sequence part composite column
Hi,
My relational mode is as follows
Policy (policynum PK) has 1:M with Summary (policynum FK, SummaryNum part of PK, other columns part of PK)
Basically for each policy users can enter notes with SummaryNum 1, 2, 3, 4.... These numbers are shown to the user for tracking purpose. I need to make sure summary notes for EACH policy start with 1 (cannot really use sequence in the table in the strictest sense) and are incremented by 1. The current Oracle form basically creates the next highest possible value of SummaryNum by adding one to the currently available highest value. In brief, it is like a sequence number for summaries of a particular policy in the summary table.
PRE-INSERT
SELECT MAX(SummaryNum ) + 1
FROM Summary
I am trying to replicate this in ADF BC (using 11g) and know that not using sequencing and adding one to get the next number is a very bad idea due to concurrency challenges (transactional ACID properties). The reasons are as follows.
• Using MAX(policy_memo_seq_num) + 1 is not scalable,
• It will lead to duplicates in a multi-user environment, whether ADF BC, Oracle Forms, or any other technology
I also know how to create a sequence in the DB, a related trigger, and then set the attribute in the EO properties as DBSequence. My challenge is that since SummaryNum is not a primary key, and instead is part of a composite key in my case, how do I make sure that summary notes for EACH policy start with 1 in the Summary table?
It appears that I cannot really use a sequence in the strictest sense, as that would mean the SummaryNum for each policy starts from the next available sequence number, but what I really want is for it to start at one for every policy.
I would appreciate any help.
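One common pattern (sketched here in Python with made-up names, not actual ADF BC code) is to key rows internally by a single global sequence and derive the per-policy display number at read time, which mirrors what ROW_NUMBER() OVER (PARTITION BY policynum ORDER BY creation) would do in the database:

```python
from collections import defaultdict

def assign_summary_nums(rows):
    """Derive per-policy display numbers for summary notes.

    rows: (policynum, note) pairs, assumed already ordered by creation
    time (e.g. by a global sequence). Returns (policynum, summary_num,
    note) where summary_num restarts at 1 for each policy.
    """
    counters = defaultdict(int)  # running count per policy
    out = []
    for policy, note in rows:
        counters[policy] += 1
        out.append((policy, counters[policy], note))
    return out
```

The display number is then a derived value rather than a stored column, so concurrent inserts never fight over MAX+1; only the global sequence (which the database serializes safely) is stored.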
Thanks,
Not sure if there is a better way, but here is one way. Let's say your table was like this:
SQL> desc versioned_item
Name Null? Type
ID NOT NULL NUMBER
VERSION NOT NULL NUMBER
DESCRIPTION VARCHAR2(20)
and let's say your data looked like this:
SQL> select * from versioned_item order by id, version
ID VERSION DESCRIPTION
1001 1 Item 1001
1001 2 Item 1001
1001 3 Item 1001
1002 1 Item 1002
1002 2 Item 1002
1003 1 Item 1003
To select only the rows for the max-version-id, you could do this:
select id, version,description
from versioned_item
where (id,version) in (select id,max(version) from versioned_item group by id)
order by id
ID VERSION DESCRIPTION
1001 3 Item 1001
1002 2 Item 1002
1003 1 Item 1003
To capture this as a view object, you'd just need to paste the WHERE clause above into the Where clause box of the view object. No need to use expert mode since you're not changing the select list or from clause. -
Exporting AE composition as Adobe Premiere Pro Project
Hi Adobe support team; me again. Do you know what I am supposed to do? Step by step and in very simple vocabulary, please. The question: what am I supposed to do to export an After Effects composition as an Adobe Premiere Pro project? When I export a layer from the composition it works well, but when I try exporting the whole composition, it doesn't work at all. There is obviously something wrong; what is it?
Depends on whether you are on the cloud or not, or on the new 12.0 version. One way to do it is to import your AE composition into PP, depending on your computer....
It is important to state your version and OS when posting. What seems clear at your end is being asked of someone who has not seen your challenge.
I am working on an interactive book and have set up each page as a separate composition in edge.
I am using the edge commons JS library to load multiple compositions into a main composition.
You can see how this works here: Edge Commons - Extension Library for Edge Animate and Edge Reflow | EdgeDocks.com
The way the edge commons tutorial is set up requires a button for each composition I want to load. I am interested in loading multiple compositions with "next" and "back" buttons, and "swipe left"/"swipe right" gestures on the content symbol that each composition is loaded into. I also need the swipe features on the content symbol not to interfere with the interactive elements on the loaded composition.
Please suggest a solution that will work without adding additional scripts beyond edge commons and jQuery.
Sort of. I'm using this code inside an action for a button symbol. But it doesn't work perfectly. Trying to debug it.
Let me know if you have any luck.
//Check to see if pageCounter already exists
if (typeof EC.pageCounter === 'undefined') {
    // it doesn't exist, so initialize it to the first page
    EC.pageCounter = 2;
}
//check if the page number is only 1 digit -- patch for single digits
if (EC.pageCounter < 10) {
    // it is, so pad a 0 on the front, e.g. 01 ... 09
    EC.pageCounterString = "0" + EC.pageCounter;
} else {
    // 10 and up need no padding, e.g. 11, 12, 13 ... 115
    EC.pageCounterString = EC.pageCounter;
}
EC.loadComposition(EC.pageCounterString + "/publish/web/" + EC.pageCounterString + ".html", sym.$("container"));
EC.pageCounter = EC.pageCounter + 1;
//TODO for back: -1 -
Mini DVI to S-video and composite giving completely distorted output!!
I tried to hook up my 20" Sony Trinitron either through S-video or composite to the PowerBook without any success. I got a completely distorted picture. I have tried every video resolution setting available in the Displays preference window with the same result. Has anybody out there run into this problem at all?
I have a 12" PowerBook 1.5ghz and had the same problem with garbled video today. When I plugged in the Mini-DVI > Composite/S-Video adapter the PowerBook would consistently think it was attached to an ordinary VGA monitor, and could not sync with the television. This caused a B&W, warped, split in half picture.
I took the adapter back to the Apple store, plugged it into one of their demo units, and saw it behaved the same way. They gave me a replacement and everything has been fine since.
QS2002, 1.4ghz, 1gb, GeForce 4 Ti4600, SATA; 12" PB 1.5ghz, 512mb Mac OS X (10.4.4) -
Mini dvi to s-video/composite adapter doesn't seem to work?
Basically I'm trying to hook my rather new 2.53 GHz Mac mini to my TV, which basically has RCA inputs. So, having seen several laptops hooked to this TV as simply as using an S-video to composite adapter, I figured it would be as simple as getting the Mini-DVI to composite adapter and I'd be good. Well, for some reason this doesn't seem to do anything. The other weird thing is that when I hook the cable from the composite adapter to the TV, I get a buzz out of the speakers as the connection is being made. Why does a video signal have anything to do with the audio? Plugging my PlayStation 2 into the same input works fine. Why the difference?
Boece wrote:
!http://images2.monoprice.com/productmediumimages/47241.jpg!
+
!http://images2.monoprice.com/productmediumimages/48501.jpg!
That's the setup I've used. Works great for video and photos, but webpage text can be difficult to read.
I used the yellow composite input rather than the s-video. My old tv is inside an “entertainment center” type tv stand and is so friggin heavy, it’s a pain in the axx to move, so I just used the composite plug on the front of my tv. Since the Mac mini is sitting in front of the tv it works great:-)
http://discussions.apple.com/thread.jspa?threadID=2430645&tstart=0
Message was edited by: James Press1 -
Does the DVI to Video adapter send both S-video and composite simultaneously?
Hi,
I'm wondering: if you connect the DVI to Video adapter on the new Mac mini (DVI to S-video + composite), does the Mini send BOTH S-video and composite at the same time?
If connected to a TV and a projector, I want to see both simultaneously, with no need to change anything in the setup...
Both having the same resolution is, I guess, a must!
Regards,
Pat
Yes, many think so, but I haven't heard from anyone who's sure?! It can't be such an odd thing to connect a Mini to two displays at the same time...