LR roundtrip to PS different?
Hello everybody,
I work a lot in Lightroom 3, and sometimes I roundtrip (take an image) to Photoshop CS5 and then back.
In Lightroom I select Photo > Edit in > Photoshop.
Then I used to get a dialog asking me to edit either the original, the original with Lightroom adjustments, or a copy with Lightroom adjustments.
For the past couple of days I haven't gotten that dialog anymore; instead it goes straight to PS. I can work in PS, then save, and I used to be taken back to LR automatically. Now I have to hide (or close, for that matter) PS in order to get LR up front.
Then I see the original (DNG or CR2) stacked with the new PSD, so the end result is actually what I would have chosen, given the chance.
But all in all what happened to the dialog and to the automatic return to LR?
It's on both my Macs (MBP and iMac, both running Lion 10.7.2). The change seems to have taken place since the upgrade of LR to 3.5 with Camera Raw 6.5, but I could be wrong.
Cheers,
Rob
Hi Rob,
Not sure I fully understand this behaviour, but have a look at:
http://forums.adobe.com/message/3964478#3964478
I think it's the same thing you're describing.
HTH
David
Similar Messages
-
I can't roundtrip from Aperture 3.3.2 to Photoshop!
Hi,
I am having significant problems with Aperture and Photoshop interaction. As a working professional photographer it is a serious issue that is causing me real stress and it is commercially important that I rectify it quickly.
After extensive testing I am unsure as to the root cause of this problem; different things point towards different causes. At this stage I am unsure whether this is being caused by an Aperture software issue, a Photoshop software issue, or indeed a hardware problem.
Mindful of this, I am sending this email to Apple, Adobe, the NAPP help desk, Aperture Expert forum in the hope that somebody can resolve this issue.
THE PROBLEM
When I try and roundtrip images from Aperture v3.3.2 to Photoshop CS6, Aperture prepares the files as tiffs (8 bit) as per the export settings in my preferences dialog box. I can see them being duplicated in the Aperture window, but when Photoshop opens only 1 image is available; the others, whilst sitting in Aperture, are not shown in Photoshop CS6.
I have run the same round tripping process with images sent from Lightroom 4 and they DO all appear in Photoshop. This does point towards an Aperture problem rather than Photoshop.
I have tried the same process using Photoshop CS5 from Aperture 3.3.2 and the situation is the same.
Thinking that it could be a hardware issue or old preference files etc., I did a completely clean install of OS X Mountain Lion, Aperture 3.3.2 and Photoshop CS6, and still all is the same. Additionally, I tested the problem on different hardware (MacBook Air) and the problem is replicated there.
There was a time in recent months, before the introduction of Aperture 3.3.2 and Mountain Lion, when this problem did not occur and the process did work on both bits of hardware that it is being reproduced on now...
I have trawled painfully through all my preferences in Aperture and Photoshop to see if this is a simple setting issue but to date cannot identify one.
I have posted the problem on the web in various forums and can only find a very small handful of people having this issue; it doesn't seem to be widespread.
Please can you help me to rectify this significant issue. I am a professional trying to get work done and this problem is adding enormously to my workload.
Best regards
Richard
Another user changed Aperture to read Aperture 3. Of course it did not update, as the app needs to be named simply Aperture. BUT, when he changed it back to Aperture, he was able to update.
Maybe there is an invisible character in there like a space or something.
If that doesn't work I would contact the App Store Support. -
Hello,
I find the ACR adjustments in Bridge and LR to be much more intuitive than Aperture. But, I don't want to switch to LR since Aperture (to me) wins hands-down as an overall better application.
Until I really get a good handle on working with Aperture adjustments, I was thinking of a different workflow option for images that I need to externally edit in CS3. Basically, I was hoping to be able to roundtrip from Aperture to CS3, then open ACR to do the preliminary adjustments before the CS3 edits. It looks to me like this isn't possible - or maybe I'm just overlooking something.
Does anyone know if this is possible - or perhaps have another approach?
Thanks in advance,
Dave
I've designated Bridge as my external image editor and specified TIFF for the output format. When Bridge opens as the external editor, I open the TIFF version in ACR, edit, then open in CS3. After tweaking in CS3 with full use of my third-party plugins I save as TIFF, overwriting the original Aperture TIFF version. Back in Aperture, Viewer shows the changes made in ACR and CS3.
PSD files cannot be opened by ACR, so TIFF is the only format for roundtripping to ACR that doesn't have the possibility of compression artifacts.
Note that Aperture does the RAW conversion and ACR is used as an image processor for TIFF files, then CS3 is used for refinements to the ACR adjustments. ACR's RAW conversion is bypassed. Also note that the TIFF files are large, so there are no savings in storage space from all-Aperture processing. All versions are managed through Aperture, though.
One inconvenience: even though I select multiple images to open with Bridge as external editor, only one file appears in Bridge so I can't do batch processing. If anyone figures out a workaround, please let us know! -
AUTOTRACE stats different in SQL Developer vs SQL Plus
Using SQL Developer 3.2.20.09, database is 11.2 on Linux. I'd like to know why I get different stats when using AUTOTRACE in SQL Developer vs SQL Plus. If I do the following in SQL Developer and in SQL Plus I get a very different set of stats:
set autotrace on
select * from emp;
set autotrace off
I run this with "run worksheet" in SQL Developer and I see this set of stats:
Statistics
4 user calls
0 physical read total multi block requests
0 physical read total bytes
0 cell physical IO interconnect bytes
0 commit cleanout failures: block lost
0 IMU commits
0 IMU Flushes
0 IMU contention
0 IMU bind flushes
0 IMU mbu flush
I do the same in SQL Plus and I see these stats:
Statistics
0 recursive calls
0 db block gets
7 consistent gets
0 physical reads
0 redo size
1722 bytes sent via SQL*Net to client
519 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
A very different set of stats. The stats shown in SQL*Plus seem to be much more useful for performance optimization than the ones shown in SQL Developer. Why the different set of stats for each? Thanks.
Cross-referencing to another recent thread on this topic:
Re: set autotrace on statistics in sqldeveloper (oracle 11g) is wrong -
Multiple Queries on Same/Different Table?
Hello all,
I have started to learn how to integrate Java with databases using JDBC. However, I have a question whose answer would make my life much easier. From what I've learned, every time I want to do a query, I have to create a new statement and new result set to perform the query on the same/different tables. This seems rather redundant and inefficient. Is there a way to get around this? Suppose I wanted to SELECT * FROM A and then perform SELECT name FROM A WHERE <some condition> and then output both of them. Is there any way I can just build upon my old result set? Or must I create a whole new result set and connected statement every time?
Edited by: dnguyen1022 on Oct 9, 2009 11:16 PM
dnguyen1022 wrote:
Hello all,
I have started to learn how to integrate java with databases using JDBC. However I have a question that would make my life much easier. From what I've learned everytime I want to do a query, I have to create a new statement and new result set to perform the query on same/different tables.
That isn't entirely true, but normally that is what you do.
This seems rather redundant and inefficient.
Compared to what? The roundtrip to the database and the work that the database does to get the data? Then no, it is very efficient.
Is there a way to go around this? Suppose I wanted to SELECT * FROM A and then perform SELECT name FROM A WHERE <some condition> and then output both of them.
A JDBC query can consist of more than one result set.
Note that if you have a real performance problem (measured production load) then alternative designs are much more likely to lead to substantial savings versus fiddling with individual queries. If you haven't actually profiled the app, then note that focusing on the architecture/design rather than the implementation of any application will lead to far more performance benefits. -
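The reply's point - yes, each query produces a new result set, but reusing one statement object for sequential queries is normal and cheap compared to the database roundtrip itself - can be sketched with Python's sqlite3 standing in for JDBC (table name and data are invented for illustration):

```python
import sqlite3

# In-memory database standing in for the real one (table and data are made up).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()  # one Statement-like cursor, reused for every query below
cur.execute("CREATE TABLE a (name TEXT, score INTEGER)")
cur.executemany("INSERT INTO a VALUES (?, ?)",
                [("ann", 90), ("bob", 55), ("cho", 75)])
conn.commit()

# First query: SELECT * FROM A
cur.execute("SELECT * FROM a")
everyone = cur.fetchall()

# Second query on the same cursor: a new result set replaces the old one,
# much as re-executing a JDBC Statement closes its previous ResultSet.
cur.execute("SELECT name FROM a WHERE score >= 70")
passing = [row[0] for row in cur.fetchall()]

print(len(everyone))  # number of rows in A
print(passing)        # names meeting the condition
```

The connection object is the expensive resource to reuse; statements and result sets are lightweight by comparison.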
How to avoid roundtrip to server on value select in dropdown box in ALV cel
Dear Experts,
I have a WD ALV table where the user edits values in the cells using a dropdown by key element.
Then after editing he saves the whole table. When a value is changed in the dropdown box, a roundtrip to the server occurs, and it takes noticeable time for every cell. My dropdown lists are fixed and are the same for all rows. It is not a pleasant user experience and is annoying for the user.
Is there a way to disable this roundtrip on each value selection for each cell and transfer the data for the whole ALV table only when I save the whole table (separate "Save" button)?
Help very much appreciated.
Dmitry
A lot of enterprises, however, actively avoid systems which are locked down to a particular server for very legitimate reasons. If my data center dies in the middle of the night, I sure don't want to have to call your mobile phone so that you can get to a computer, log in to the office network, and get me a new key so that I can finish my emergency failover. If I've got dozens of applications, I absolutely don't want to do that with dozens of different vendors.
It sounds like your problem, though, isn't that users are installing your software on multiple computers; it's that they are accessing functionality they haven't licensed. That is generally a much easier problem to solve and doesn't require you to lock anything down to a particular machine. You can create a table LICENSED_CONTENT, for example:
CREATE TABLE licensed_content (
client_id NUMBER,
content_type VARCHAR2(30),
key RAW(128)
)
In this case, KEY is, say, a hash (using the DBMS_CRYPTO or DBMS_OBFUSCATION_TOOLKIT packages if you'd like) of the client_id, content_type, and a bit of salt (i.e. a fixed string that only you know). When you sell a license to manage diamond content, you provide a script that inserts the appropriate row in the LICENSED_CONTENT table. When your application starts up, it reads the LICENSED_CONTENT table and verifies the hash before allowing users to access that type of content. This allows legitimate customers to move the software from one system to another but prevents them from accessing new functionality without a new license.
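A minimal sketch of the check described above, using Python's hashlib in place of the server-side DBMS_CRYPTO call (the salt, client id, and content type here are all invented for illustration):

```python
import hashlib

SALT = "only-you-know-this"  # hypothetical fixed secret string

def license_key(client_id: int, content_type: str) -> bytes:
    """Hash of client_id + content_type + salt, like the KEY column value."""
    payload = f"{client_id}|{content_type}|{SALT}".encode()
    return hashlib.sha256(payload).digest()

def is_licensed(client_id: int, content_type: str, stored_key: bytes) -> bool:
    # At startup the application recomputes the hash and compares it to
    # what the license script inserted into LICENSED_CONTENT.
    return license_key(client_id, content_type) == stored_key

# Row the vendor's install script would have inserted for client 42:
stored = license_key(42, "diamond")
print(is_licensed(42, "diamond", stored))  # licensed content type
print(is_licensed(42, "ruby", stored))     # content type never licensed
```

Because the salt never leaves the vendor, a customer cannot forge a valid row for a new content type, yet nothing ties the row to a particular machine.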
Justin -
JSF 1.2 - Roundtrip just to create a simple Go Back link scenario
From what I've read, it seems JSF 1.2 treats everything as HTTP POST. Hence, doing a simple Go Back means one has to do a round trip to the backend database and back. Here's how I've done it:
1. Create a form and populate it with a list of information. For each line of info in the list, I define an h:outputLink that has four f:param directives to pass the parameters I want to a second form. Of course, the names/ids of these params have to be unique. Say, if you want to create a second set of h:outputLinks for the list, you have to define new unique names/ids. Yuck! Why must the name/id be unique? After all, they are just going to become URL parameters.
2. To create a "Go Back" link on my second form, I define a h:outputLink with the same number of f:params deriving their values from the URL parameters received from the first form. I have tried the h:commandLink/h:commandButton without success - they just resubmitted the second form without redirecting back to the first form.
3. Once the "Go Back" link is clicked, I have to do a roundtrip to the backend database just to "refresh" the first form - and if I have multiple backing beans providing data for this form - this roundtrip can become quite expensive.
What I gather from doing the JSF for the last five weeks:
1. All requests are submitted as HTTP POSTs - this much is obvious. Thus, this will break the ability to do "light" navigation (such as a Go Back link), bookmarking and so forth due to the POST nature of JSF 1.2 or earlier. JSF 2.x might be different though - I may have to check the spec to confirm this. I have heard that one may need to use Facelets and not JSP as the view technology - otherwise certain features of this implementation will not be available.
2. To pass information as parameters means one has to validate URL parameters to prevent URL tampering from third parties.
3. I can use session scope for the form - but this is an "ugly" way of doing it, i.e. if I move to a different area of the website and come back to the form, I can still see the previously generated data - until the session expires of course - then the form is reset to its pre-submit status. I can appreciate the use of session scope in situations where one wants to keep certain information active, e.g. when a user is logged into a forum or a secured session (HTTPS). To use session scope just to navigate between forms (while maintaining the state of the displayed data) is overkill.
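On point 2 above - validating URL parameters against tampering - one common approach is to sign the serialized parameters with an HMAC and verify the signature on the way back in. A minimal sketch in Python (the key and parameter names are made up, and this is framework-agnostic, not a JSF API):

```python
import hashlib
import hmac
from urllib.parse import urlencode

SECRET = b"server-side-key"  # hypothetical; never sent to the client

def sign_params(params: dict) -> str:
    """Serialize params and append an HMAC so tampering is detectable."""
    qs = urlencode(sorted(params.items()))
    sig = hmac.new(SECRET, qs.encode(), hashlib.sha256).hexdigest()
    return f"{qs}&sig={sig}"

def verify_query(query: str) -> bool:
    """Recompute the HMAC over everything before &sig= and compare."""
    qs, _, sig = query.rpartition("&sig=")
    expected = hmac.new(SECRET, qs.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

url = sign_params({"orderId": "123", "page": "2"})
print(verify_query(url))                        # untouched query passes
print(verify_query(url.replace("123", "999")))  # tampered parameter fails
```

This keeps GET links bookmarkable while still letting the second form trust the values it receives.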
If there are better ways of doing something trivial like a Go Back link or bookmarking, then I'm all ears. If not, I might have to ditch JSF 1.2 and investigate JSF 2.0. Hopefully, that release will be less painful.
Looking forward to JSF experts' comments surrounding this issue. I'm sure I'm not the first to complain about this framework and I won't be the last. Enough rant...
Edited by: icepax on 26/11/2009 11:32
Edited by: icepax on 29/11/2009 05:45
BalusC wrote:
First, outputlinks do not generate POST.
Appreciate your reply. I stand corrected by this comment. But reading a few articles, for example [JSF Discussions and Blogs|http://ptrthomas.wordpress.com/2009/05/15/jsf-sucks/], one has to wonder how JSF 1.x stacks up.
>
Second, how would you do it all in plain HTML/JSP/Servlet without JSF? If you tell how you would do it, then we can suggest the JSF way of it (but with the time you should probably already have realized that your concern/rant makes no sense).
Sorry, Bauke, I went from Java straight to JSF for the past five weeks, but if I have to guess, one would use HttpServletRequest or HttpServletResponse in the HTML-JSP-Servlet scenario. What I have done with the roundtrip actually works for me, albeit I'm hitting the backend twice instead of once (there are pros and cons of doing it this way but I won't go into detail here). However, this still begs the question of why doing something trivial like a Go Back link or bookmarking is not so trivial in JSF.
>
At any rate, you may find JBoss Seam interesting. It has an extra scope between request and session: the conversation scope.
Thanks. I could try it or move straight to JSF 2.0 or later.
Don't get me wrong, I love JSF. But I won't see its real potential until I move to JSF 2.x or later. Integrated Facelets support is something I'm looking forward to -:) -
Premiere Markers export and import: Roundtrip via FCP XML not working
The objective of my project is to create a sports app that records markers and saves them in different XML / CSV formats that I can then import into different NLE packages.
I have already succeeded with Vegas Pro but also want to support NLEs on Mac OS.
In order to approach this analytically, I exported a very simple project with 1 sequence including 1 clip and 3 markers (with length zero and length > 0) to FCP XML from Adobe Premiere CC, then attempted a round trip by importing it back via FCP import into Adobe Premiere CC.
This round trip does not work. Any insight into why, or any suggestion of another mechanism that will work?
Thanks
Thomas
let's go back to the purpose:
I have written a mobile app with which people can capture the times of the important moments during a sports match by pressing buttons.
I then convert the time information into an XML or CSV file that can be read by the different NLEs. Sony Vegas allows importing
markers independently of a sequence, and now I am looking at the same thing for Premiere. This allows editing the highlights very quickly versus watching
the entire match coverage again.
I am trying to replicate that in Premiere through the FCP XML, but now that the roundtrip works, the issue is that FCP XML structurally requires a sequence.
I could imagine providing a dummy sequence that can be overlaid by the real content just to preserve the markers. Any ideas?
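The capture-to-file handoff described here - dumping the captured times to CSV for the NLE to read - could be sketched like this in Python (the column layout and marker data are illustrative only, not any NLE's actual marker import schema):

```python
import csv

# Times (in seconds) captured by button presses during the match; values invented.
markers = [
    (74.2, "goal"),
    (310.5, "yellow card"),
    (2688.0, "goal"),
]

def write_marker_csv(path, markers):
    """Write one row per marker: a timestamp plus a label."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["seconds", "label"])
        writer.writerows(markers)

write_marker_csv("markers.csv", markers)
```

The FCP XML route would wrap the same marker list in the sequence structure that format insists on.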
Regards,
TK -
Different 'execution plans' for same sql in 10R2
DB=10.2.0.5
OS=RHEL 3
I'm not sure of this, but I am seeing different plans for the same SQL.
select sql_text from v$sqlarea where sql_id='92mb4z83fg4st'; <---TOP SQL from AWR
SELECT /*+ OPAQUE_TRANSFORM */ "ENDUSERID","LASTLOGINATTEMPTTIMESTAMP","LOGINSOURCECD","LOGINSUCCESSFLG",
"ENDUSERLOGINATTEMPTHISTORYID","VERSION_NUM","CREATEDATE"
FROM "BOMB"."ENDUSERLOGINATTEMPTHISTORY" "ENDUSERLOGINATTEMPTHISTORY";
SQL> set autotrace traceonly
SQL> SELECT /*+ OPAQUE_TRANSFORM */ "ENDUSERID","LASTLOGINATTEMPTTIMESTAMP","LOGINSOURCECD","LOGINSUCCESSFLG",
"ENDUSERLOGINATTEMPTHISTORYID","VERSION_NUM","CREATEDATE"
FROM "BOMB"."ENDUSERLOGINATTEMPTHISTORY" "ENDUSERLOGINATTEMPTHISTORY";
1822203 rows selected.
Execution Plan
Plan hash value: 568996432
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1803K| 75M| 2919 (2)| 00:00:36 |
| 1 | TABLE ACCESS FULL| ENDUSERLOGINATTEMPTHISTORY | 1803K| 75M| 2919 (2)| 00:00:36 |
Statistics
0 recursive calls
0 db block gets
133793 consistent gets
0 physical reads
0 redo size
76637183 bytes sent via SQL*Net to client
1336772 bytes received via SQL*Net from client
121482 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1822203 rows processed
===================================== another plan ===============
SQL> select * from TABLE(dbms_xplan.display_awr('92mb4z83fg4st'));
15 rows selected.
Execution Plan
Plan hash value: 3015018810
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | COLLECTION ITERATOR PICKLER FETCH| DISPLAY_AWR |
Note
- rule based optimizer used (consider using cbo)
Statistics
24 recursive calls
24 db block gets
49 consistent gets
0 physical reads
0 redo size
1529 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
15 rows processed
=========second one shows only 15 rows...
Which one is correct?
Understood, the second plan is for dbms_xplan itself.
Anyhow, I opened a new session where I did NOT turn on autotrace, but the plan is somewhat different from the original.
SQL> /
PLAN_TABLE_OUTPUT
SQL_ID 92mb4z83fg4st
SELECT /*+ OPAQUE_TRANSFORM */ "ENDUSERID","LASTLOGINATTEMPTTIMESTAMP","LOGINSOURCECD","
LOGINSUCCESSFLG","ENDUSERLOGINATTEMPTHISTORYID","VERSION_NUM","CREATEDATE" FROM
"BOMB"."ENDUSERLOGINATTEMPTHISTORY" "ENDUSERLOGINATTEMPTHISTORY"
Plan hash value: 568996432
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | | | 2919 (100)| |
| 1 | TABLE ACCESS FULL| ENDUSERLOGINATTEMPTHISTORY | 1803K| 75M| 2919 (2)| 00:00:36 |
15 rows selected.
I am just wondering which plan is accurate and which one I should believe. -
One query taking different time to execute on different environments
I am working on Oracle 10g. We have a setup of two different environments - Development and Alpha.
I have written a query which gets some records from a table. This table contains around 1,000,000 records in both environments.
This query takes 5 seconds to execute in the Development environment to get 200 records, but the same query takes around 50 seconds to execute in the Alpha environment to retrieve the same number of records.
Data and indexes on the table is same on both environments. There are no joins in the query.
Please let me know what are the all possible reasons for this?
Edited by: 956610 on Sep 3, 2012 2:37 AM
Below is the trace on the two environments ---
-----------------------Development ------------------------------
CPU used by this session 1741
CPU used when call started 1741
Cached Commit SCN referenced 15634
DB time 1752
Effective IO time 7236
Number of read IOs issued 173
SQL*Net roundtrips to/from client 14
buffer is not pinned count 90474
buffer is pinned count 264554
bytes received via SQL*Net from client 4507
bytes sent via SQL*Net to client 28859
calls to get snapshot scn: kcmgss 6
calls to kcmgcs 13
cell physical IO interconnect bytes 165330944
cleanout - number of ktugct calls 5273
cleanouts only - consistent read gets 5273
commit txn count during cleanout 5273
consistent gets 202533
consistent gets - examination 101456
consistent gets direct 19686
consistent gets from cache 182847
consistent gets from cache (fastpath) 81013
enqueue releases 3
enqueue requests 3
execute count 6
file io wait time 1582
immediate (CR) block cleanout applications 5273
index fetch by key 36608
index scans kdiixs1 36582
no buffer to keep pinned count 8
no work - consistent read gets 95791
non-idle wait count 42
non-idle wait time 2
opened cursors cumulative 6
parse count (hard) 1
parse count (total) 6
parse time cpu 1
parse time elapsed 2
physical read IO requests 181
physical read bytes 163299328
physical read total IO requests 181
physical read total bytes 163299328
physical read total multi block requests 162
physical reads 19934
physical reads direct 19934
physical reads direct temporary tablespace 248
physical write IO requests 8
physical write bytes 2031616
physical write total IO requests 8
physical write total bytes 2031616
physical write total multi block requests 8
physical writes 248
physical writes direct 248
physical writes direct temporary tablespace 248
physical writes non checkpoint 248
recursive calls 31
recursive cpu usage 1
rows fetched via callback 23018
session cursor cache hits 4
session logical reads 202533
session uga memory max 65488
sorts (memory) 3
sorts (rows) 19516
sql area evicted 2
table fetch by rowid 140921
table scan blocks gotten 19686
table scan rows gotten 2012896
table scans (direct read) 2
table scans (long tables) 2
user I/O wait time 2
user calls 16
workarea executions - onepass 4
workarea executions - optimal 7
workarea memory allocated 17
------------------------------------------------------ For Alpha ------------------------------------------------------------------
CCursor + sql area evicted 1
CPU used by this session 5763
CPU used when call started 5775
Cached Commit SCN referenced 9264
Commit SCN cached 1
DB time 6999
Effective IO time 4262103
Number of read IOs issued 2155
OS All other sleep time 10397
OS Chars read and written 340383180
OS Involuntary context switches 18766
OS Other system trap CPU time 27
OS Output blocks 12445
OS Process stack size 24576
OS System call CPU time 223
OS System calls 20542
OS User level CPU time 5526
OS User lock wait sleep time 86045
OS Voluntary context switches 15739
OS Wait-cpu (latency) time 273
SQL*Net roundtrips to/from client 14
buffer is not pinned count 2111
buffer is pinned count 334
bytes received via SQL*Net from client 4486
bytes sent via SQL*Net to client 28989
calls to get snapshot scn: kcmgss 510
calls to kcmgas 4
calls to kcmgcs 119
cell physical IO interconnect bytes 340041728
cleanout - number of ktugct calls 1
cleanouts only - consistent read gets 1
cluster key scan block gets 179
cluster key scans 168
commit txn count during cleanout 1
consistent gets 41298
consistent gets - examination 722
consistent gets direct 30509
consistent gets from cache 10789
consistent gets from cache (fastpath) 9038
cursor authentications 2
db block gets 7
db block gets from cache 7
dirty buffers inspected 1
enqueue releases 58
enqueue requests 58
execute count 510
file io wait time 6841235
free buffer inspected 8772
free buffer requested 8499
hot buffers moved to head of LRU 27
immediate (CR) block cleanout applications 1
index fast full scans (full) 1
index fetch by key 196
index scans kdiixs1 331
no work - consistent read gets 40450
non-idle wait count 1524
non-idle wait time 1208
opened cursors cumulative 511
parse count (hard) 39
parse count (total) 44
parse time cpu 78
parse time elapsed 343
physical read IO requests 3293
physical read bytes 329277440
physical read total IO requests 3293
physical read total bytes 329277440
physical read total multi block requests 1951
physical reads 40195
physical reads cache 8498
physical reads cache prefetch 7467
physical reads direct 31697
physical reads direct temporary tablespace 1188
physical write IO requests 126
physical write bytes 10764288
physical write total IO requests 126
physical write total bytes 10764288
physical writes 1314
physical writes direct 1314
physical writes direct temporary tablespace 1314
physical writes non checkpoint 1314
prefetched blocks aged out before use 183
recursive calls 1329
recursive cpu usage 76
rows fetched via callback 7
session cursor cache count 8
session cursor cache hits 491
session logical reads 41305
session pga memory max 851968
session uga memory -660696
session uga memory max 3315160
shared hash latch upgrades - no wait 14
sorts (disk) 1
sorts (memory) 177
sorts (rows) 21371
sql area evicted 10
table fetch by rowid 613
table scan blocks gotten 30859
table scan rows gotten 3738599
table scans (direct read) 4
table scans (long tables) 8
table scans (short tables) 3
user I/O wait time 1208
user calls 16
workarea executions - onepass 7
workarea executions - optimal 113
workarea memory allocated -617 -
View running slowly on different database
Hi,
I have an issue where a view which runs quickly on two different databases runs slowly on a 3rd database. The database schema is the same on all 3 machines. The two databases that run the view quickly are v.9.2.0.1.0, and v.10.2.0.2.0, the 3rd database that runs the view slowly is v.9.2.0.7.0. The physical machine that the 3rd database runs on is the fastest machine.
On the poorly performing database, if I set autotrace on, then I get this:
Statistics
36 recursive calls
0 db block gets
618169 consistent gets
10 physical reads
0 redo size
406 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
Obviously, the sheer number of consistent gets has to be the source of the problem.
explain plan doesn't reveal anything obvious.
I'm struggling to resolve the issue. Any help or pointers would be appreciated.
The indexes are the same. I've also tried rebuilding the indexes.
Auto traces for the other 2 boxes
Statistics
0 recursive calls
0 db block gets
221 consistent gets
0 physical reads
0 redo size
379 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Statistics
0 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
255 bytes sent via SQL*Net to client
369 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed -
I can't roundtrip from Aperture into Photoshop
Hi,
I am having significant problems with Aperture and Photoshop interaction. As a working professional photographer it is a serious issue that is causing me real stress and it is commercially important that I rectify it quickly.
After extensive testing I am unsure as to the root cause of this problem; different things point towards different causes. At this stage I am unsure whether this is being caused by an Aperture software issue, a Photoshop software issue, or indeed a hardware problem.
Mindful of this, I am sending this email to Apple, Adobe, the NAPP help desk, Aperture Expert forum in the hope that somebody can resolve this issue.
THE PROBLEM
When I try and roundtrip images from Aperture v3.3.2 to Photoshop CS6, Aperture prepares the files as tiffs (8 bit) as per the export settings in my preferences dialog box. I can see them being duplicated in the Aperture window, but when Photoshop opens only 1 image is available; the others, whilst sitting in Aperture, are not shown in Photoshop CS6.
I have run the same round tripping process with images sent from Lightroom 4 and they DO all appear in Photoshop. This does point towards an Aperture problem rather than Photoshop.
I have tried the same process using Photoshop CS5 from Aperture 3.3.2 and the situation is the same.
Thinking that it could be a hardware issue or old preference files etc., I did a completely clean install of OS X Mountain Lion, Aperture 3.3.2 and Photoshop CS6, and still all is the same. Additionally, I tested the problem on different hardware (MacBook Air) and the problem is replicated there.
There was a time in recent months, before the introduction of Aperture 3.3.2 and Mountain Lion, when this problem did not occur and the process did work on both bits of hardware that it is being reproduced on now...
I have trawled painfully through all my preferences in Aperture and Photoshop to see if this is a simple setting issue but to date cannot identify one.
I have posted the problem on the web in various forums and can only find a very small handful of people having this issue; it doesn't seem to be widespread.
Please can you help me to rectify this significant issue. I am a professional trying to get work done and this problem is adding enormously to my workload.
Best regards
Richard
>> but when Photoshop opens only 1 image is available,
That means that Aperture only sent 1 image file to Photoshop.
Photoshop accepts multiple files via command line arguments or Apple Events (or COM on Windows).
If there is a setting, it would have to be in Aperture. -
Roundtrip between FCP & Motion not working
Hi, I have just upgraded to Final Cut Studio 2 with Motion 3. I wanted to try the roundtrip integration between FCP & Motion. When I send to Motion, my clips are not recognized in Motion; I have to physically reconnect the media in the Media tab. I thought it was supposed to be automatic.
Does anyone encounter the same problem as me? -
Please read this first. THIS IS NOT A GAMMA ISSUE! I am well aware of the difference between RGB v YUV & 0-255 v 16-235. It is not an RGB v YUV problem or a 0-255 v 16-235 problem.
I am experiencing a noticeable color shift within highly saturated files. I am specifically noticing a difference in REDs, at least within my current project.
Files are Alexa C-Log. Edited within FCP. Graded by BaseLight within FCP. Comped within NUKE. Titles added within AE. Sent back to FCP timeline.
Here is what I cannot explain. I can roundtrip the file within Nuke. I can Read a ProRes4444 file which I can then render out as multiple formats, i.e. QT ProRes4444, EXR, TIFF, DPX. I can bring those files back in and create a difference matte which produces pure black (well close enough, small differences in gamma when elevated astronomically). So apples to apples, color space matches, gamma matches.
Now when I bring these rendered files into AE, the image sequences display a different color than the ProRes4444 QTs. ALL the image sequences match and ALL the ProRes4444 files match. I have tried creating a project with an unmanaged color space as well as a traditional rec709 project color space. I have also tried interpreting the PR4444 files with different color profiles. NO amount of variation will allow the PR4444 files to match the image sequences. I am partial to image sequences, BUT ProRes files have become standard file types and I would like to resolve this issue.
Any thoughts?
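One way to take the difference-matte comparison out of any single app's color management is to script it. A minimal sketch, assuming the frames have already been decoded to float RGB arrays (the file loading is left out):

```python
# Sketch: a numerical "difference matte" between two decoded frames.
# A lossless roundtrip gives a maximum per-channel difference of exactly 0.
import numpy as np

def difference_matte(frame_a, frame_b):
    """Absolute per-pixel, per-channel difference of two RGB float arrays."""
    return np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64))

# Stand-in frames; in practice, decode the original and the roundtripped
# renders (e.g. the TIFF sequence) to arrays with an image library.
original = np.zeros((4, 4, 3))
original[..., 0] = 0.9  # a saturated red patch
roundtrip = original.copy()

diff = difference_matte(original, roundtrip)
print(diff.max())  # 0.0 when the roundtrip is lossless
```

Running this on frames decoded by different tools (Nuke vs. AE) would pin down which importer is applying the shift.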
I have tried this:
http://blogs.adobe.com/aftereffects/2009/12/prores-4444-colors-and-gamma-s.html
and this:
http://blogs.adobe.com/aftereffects/2010/05/prores-4444-and-prores-422-in.html
Please observe . . . .
TIA
-M
I have more results after further testing.
Running OS X 10.8.4
AE 10.5.0.253
K5000 GPU
My AE project is setup as a 16bpc, HDTV (rec709), Blend Colors Using 1.0 Gamma, Match Legacy After Effects QuickTime Gamma Adjustments, Compensate for Scene-referred Profiles.
I import an FCP 7 generated QT ProRes4444 file. The file is rendered as YUV Full Precision with Super-Whites. In an attempt to roundtrip the file, I rendered the single file with no effects back out as a ProRes4444 file with rec709, sRGB or Preserve RGB color profiles. Once imported back into AE, each of these 3 files matches the original identically. Quite well actually: even when I turn on difference mode and crank the gamma, they hold up really well. Nothing perceptually different here.
NOW, when I render out as DPX, EXR or TIFF, I get a completely different result. EACH image sequence imports identically, so DPX matches EXR matches TIFF. BUT they are all identically different from the original. Once again there is a color shift. NOT a gamma shift, a HUE shift.
I am rendering each file as a rec709 and importing as a rec709.
What gives?
Here is the kicker, when I switch to my windows box (WIN7 SP1), then open my AEP file. The colors match identically across all files & formats!!! I literally set my OS X AE comp to solo 2 layers on difference mode. There is a clear difference. When I open the same file within WIN7, pure BLACK. NO DIFFERENCE. Holy crap this is alarming. Side note, still have the hue shift within NUKE across platforms but it is consistent.
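To make the "hue shift, not gamma shift" observation measurable on both machines, one option is to sample the same pixels from each render and compare HSV hue directly. A sketch with stand-in pixel values (sampling from the real renders is assumed):

```python
# Sketch: quantify a hue shift between two decoded versions of the same frame.
# A pure gamma difference leaves hue untouched; a real hue shift shows up here.
import colorsys

def mean_hue(pixels):
    """Average HSV hue (0..1) over an iterable of (r, g, b) tuples in 0..1."""
    hues = [colorsys.rgb_to_hsv(r, g, b)[0] for r, g, b in pixels]
    return sum(hues) / len(hues)

def hue_shift(h_a, h_b):
    """Signed hue difference, handling the wrap-around at red (hue 0/1)."""
    return (h_b - h_a + 0.5) % 1.0 - 0.5

# Stand-in pixels; in practice, sample the same region from both renders.
mac_pixels = [(0.90, 0.10, 0.10)] * 4  # saturated red
win_pixels = [(0.90, 0.10, 0.20)] * 4  # same red nudged toward magenta

shift = hue_shift(mean_hue(mac_pixels), mean_hue(win_pixels))
print(round(shift, 4))  # -0.0208: a small shift from red toward magenta
```

Since the numbers come straight from the pixel data, identical results across OS X and Windows would confirm the shift lives in the QT import path rather than the files themselves.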
WIN7 64bit SP1
AE 10.5.0.253
GTX680 GPU
I am not going to blame Adobe, since I am almost certain this is an APPLE issue. I would like to ask the Adobe AE team: what value is AE reading that Nuke is not? Are there different QT importers?
Thanks,
Matt -
Having trouble "roundtripping" between FCP 6 and Soundtrack Pro 2
I've been an FCP user since it first came out, and believe it or not, this is my first attempt at roundtripping with Soundtrack Pro. Basically the process goes pretty well until it's time to go back to FCP.
Here's my workflow: In FCP I highlight the audio clip in the timeline. I select Send To > Soundtrack Pro Audio File Project. It creates a (sent) file, which I save (Save project with latest clip metadata is selected). Once the clip opens in Soundtrack Pro, I make the adjustments I need to make (in this case, I'm getting rid of some background noise), then I save (Include Source Audio is selected).
Here's where the problem comes in. When I go back to FCP, I get the "Some files went offline" warning. When I try to reconnect, I get the following warning:
"File Attribute Mismatch
Some attributes of one or more of the files you have chosen do not match the attributes of the original. This may cause problems within the sequences that are dependent on them. The attributes that differed are as follows:
- Media Start and End
Would you like to try to connect them again?"
When I go to the folder where the modified (sent).stap is saved, it opens and plays back fine. However, it looks like Soundtrack Pro is saving the full clip (from the original footage), not just the selection from my timeline. I'm assuming this is why I'm getting the "File Mismatch" error (basically it's too long).
Is there a step I'm missing? Any help would be greatly appreciated!
First thing to try is resetting your FCP preferences:
https://discussions.apple.com/docs/DOC-2491
It's also possible your sequence has gotten corrupted. Try editing a short clip into a new sequence and see if that works.
And what are your sequence settings?
Also, although it's a long shot, make sure the frame rate of your easy setup matches the frame rate of your sequence.