How to deal with variable length data struct in C/JNI
I have another JNI-related question. How do you handle variable-length
data structures in Java, and how do you pass a pointer to a pointer?
Basically, I have a backend in C which has records, but we don't know
how many. The API looks like:
typedef struct rec_list_s {
int rec_list_cnt;
rec_list_data_t rec_list_data[1];
} rec_list_t;
int rec_list_show(void *handle, rec_list_t **list_ptr);
/* Code snippet for rec_list_show */
int rec_list_show(void *handle, rec_list_t **list_ptr)
{
    rec_list_t *ptr;
    size_t sz;

    sz = sizeof (rec_list_t) +
        ((record_count - 1) * sizeof (rec_list_data_t));
    ptr = malloc(sz);
    /* fill the data */
    *list_ptr = ptr;
    return (0);
}
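Filled out into something compilable, the pattern looks roughly like this. The record layout and the hard-coded record count are made up for illustration; the out-parameter is written as rec_list_t ** to match the later call rec_list_show(handle, &ptr):

```c
#include <stdlib.h>

/* Assumed record layout; the real rec_list_data_t is backend-defined. */
typedef struct { int id; } rec_list_data_t;

typedef struct rec_list_s {
    int rec_list_cnt;
    rec_list_data_t rec_list_data[1];   /* really rec_list_cnt entries */
} rec_list_t;

/*
 * Sketch of the API above: the callee over-allocates one block so that
 * entries [1..cnt-1] live right after the declared [1] array, then
 * hands the new pointer back through *list_ptr.
 */
int
rec_list_show(void *handle, rec_list_t **list_ptr)
{
    int record_count = 3;               /* the backend would know this */
    size_t sz = sizeof (rec_list_t) +
        ((record_count - 1) * sizeof (rec_list_data_t));
    rec_list_t *ptr = malloc(sz);
    int i;

    (void)handle;
    if (ptr == NULL)
        return (-1);
    ptr->rec_list_cnt = record_count;
    for (i = 0; i < record_count; i++)  /* "fill the data" */
        ptr->rec_list_data[i].id = i * 10;
    *list_ptr = ptr;
    return (0);
}
```

The caller passes &ptr, reads ptr->rec_list_cnt, and frees the whole block with a single free().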
So I need to wrap rec_list_show() in a JNI call so that Java can call
it. How do I pass a pointer to a pointer from Java? I tried having the
native C code for JNI return the pointer as a result and store it in a
member of the Java class rec_list_t, and then I passed that to the JNI
call for rec_list_show. The C backend code was fine, since it got the
pointer to a pointer, but Java became unhappy when the object it was
referencing changed memory location (I suspect the garbage collector
becomes unhappy).
So what would be a good way to deal with this kind of code?
Thanks,
Sunay
Edited by: st9 on Aug 30, 2010 5:47 PM
I did not imply that you don't know C but you are implying that I don't understand C. Perhaps
google Sunay Tripathi and click I am feeling lucky so that we don't get into teaching C
discussions :) On the other hand, I am definitely looking for someone to teach me Java
otherwise I wouldn't be asking.
Anyway, let me explain again. The sample function rec_list_show() runs on the backend. It
is a different process with a different VM space. It of course knows the size of the array
and what to fill in. As a caller to that API (which is a separate process), I don't know
what that size is but I need to get the size and corresponding data in one shot because
the backend locks the table while it's providing me the info, to make sure it's synchronous.
Now I (the Java process) need to get that count and data in one shot. Since the C library
underneath me (wrapped by my JNI interface) has a private IPC mechanism to copy
the contiguous memory from the backend into my memory space, all I need is to provide
a pointer to a pointer which gets filled in by backend and is available to my process. So
my equivalent C frontend just passes a pointer to a pointer and casts the result to
rec_list_t *. The rec_list_cnt tells it how many members it got. The first member is part of
the struct itself, but the following members come right after it.
Another way to help you understand this is with this code snippet from the front-end C program:
rec_list_t *ptr, *save_ptr;
rec_list_data_t *data_ptr;
int cnt, i;

save_ptr = ptr = malloc(sizeof (rec_list_t));
rec_list_show(handle, &ptr);
assert(save_ptr != ptr);
cnt = ptr->rec_list_cnt;
for (i = 0; i < cnt; i++) {
    data_ptr = &ptr->rec_list_data[i];
    /* use data_ptr */
}
Notice the assert(). Also notice the for loop. How do I expect to walk more than one
member when rec_list_data is a fixed-size array of one member?

typedef struct rec_list_s {
    int rec_list_cnt;
    rec_list_data_t rec_list_data[1];
} rec_list_t;
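The answer to that rhetorical question is the classic C "struct hack": because the block is over-allocated, indexing past the declared [1] element stays inside the same malloc'd allocation. A tiny self-contained demonstration (the record layout is assumed):

```c
#include <stdlib.h>

typedef struct { int id; } rec_list_data_t;   /* assumed layout */

typedef struct rec_list_s {
    int rec_list_cnt;
    rec_list_data_t rec_list_data[1];         /* declared size: one */
} rec_list_t;

/* Allocate room for cnt entries even though the type declares one. */
rec_list_t *
make_list(int cnt)
{
    rec_list_t *list = malloc(sizeof (rec_list_t) +
        (cnt - 1) * sizeof (rec_list_data_t));
    int i;

    if (list != NULL) {
        list->rec_list_cnt = cnt;
        for (i = 0; i < cnt; i++)
            list->rec_list_data[i].id = i + 1;
    }
    return (list);
}

/* Walk all cnt entries, including those past rec_list_data[0]
 * that live in the over-allocated tail of the same block. */
int
sum_ids(const rec_list_t *list)
{
    int i, total = 0;

    for (i = 0; i < list->rec_list_cnt; i++)
        total += list->rec_list_data[i].id;
    return (total);
}
```

(C99 made this idiom official as a "flexible array member", written `rec_list_data_t rec_list_data[];`, but the [1] form is the traditional pre-C99 spelling.)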
Anyway, I do understand that Java will not allow me to get a reference to a long and
how Java memory management works. But the JNI native implementation is C
and I was wondering if people have managed to do some tricks there between C
and Java.
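For what it's worth, one common trick on the native side: since the block the backend hands over is contiguous and self-describing (rec_list_cnt is in the header), the JNI wrapper can copy it into a byte buffer and return that to Java as a jbyteArray (via NewByteArray + SetByteArrayRegion) or wrap it with NewDirectByteBuffer, so Java never holds a raw native pointer and the GC has nothing to move. A sketch of just the copy step in plain C — the struct layout is assumed and the JNI calls themselves are left to comments:

```c
#include <stdlib.h>
#include <string.h>

typedef struct { int id; } rec_list_data_t;   /* assumed layout */

typedef struct rec_list_s {
    int rec_list_cnt;
    rec_list_data_t rec_list_data[1];
} rec_list_t;

/*
 * Copy the whole contiguous rec_list block into a freshly allocated
 * byte buffer and report its size.  A real JNI wrapper would then hand
 * these bytes back with NewByteArray + SetByteArrayRegion (or wrap the
 * malloc'd buffer with NewDirectByteBuffer) and let the Java side
 * parse rec_list_cnt and the records out of them.
 */
unsigned char *
flatten_rec_list(const rec_list_t *list, size_t *out_sz)
{
    size_t sz = sizeof (rec_list_t) +
        (list->rec_list_cnt - 1) * sizeof (rec_list_data_t);
    unsigned char *buf = malloc(sz);

    if (buf != NULL) {
        memcpy(buf, list, sz);
        *out_sz = sz;
    }
    return (buf);
}
```

On the Java side you would then read the count and the fixed-size records out of the byte[] (e.g. with a ByteBuffer in the platform's byte order), which sidesteps the pointer-to-pointer question entirely.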
Similar Messages
-
How do I locate variable length data types with lengths of 2 or less???
How do I locate VARCHAR datatype where the lengths are 2 or less? I want to search the table list on a particular instance.
Correction to the query above:
select * from INFORMATION_SCHEMA.COLUMNS c
where DATA_TYPE = 'varchar'
and c.CHARACTER_MAXIMUM_LENGTH between 1 and 2
We need to exclude rows where it's -1 for varchar(max) columns.
For every expert, there is an equal and opposite expert. - Becker's Law
My blog
My TechNet articles -
I tried numerous ways that some people suggested in this forum but did not succeed in having the LabView Data folder created somewhere else than in My Documents when running a built LabView application. Changing the target destination folder did not help at all. A member suggested to put some line like " defaultdestination = ..." or so into the build file, but I did not know where exactly to put such a line. That folder created is particularly annoying for people who don't normally use LabView but are only using the application. Please let me know when you LabView Experts out there have found a way to eliminate or relocate the creation of that folder at application startup.
Thanks a lot.
Tim, Van

Tim,
use the path constant located in the Functions palette under 'File I/O->File Constants->Default Data Directory'. It refers either to the default directory (which is (osdatadir)\Labview Data on my WinXP system) OR TO A NON-DEFAULT PATH if you have defined one. To define a non-default path, go to the menu 'Tools->Options...' and change the entry under 'Paths->Default Data Directory'. This change will create an entry in the labview.ini file (located where the labview.exe is), e.g. the entry 'DefaultDataFileLocation=d:\temp' is created when I change the default data dir to 'D:\temp'.
Now for a compiled .exe with the name (say) myapp.exe you will find a file myapp.ini in the same location as myapp.exe. It is usually an empty file directly after the application build process. Put the line 'DefaultDataFileLocation=(your path)' in this file.
Users of your compiled app can either
- edit this .ini file to change the default,
- or you can make the menu entry 'Tools->Options...' accessible in the built VI
- or you can provide a self written dialog in your app and modify the myapp.ini programmatically
(This last way is probably not a good one since the user might have to relaunch myapp.exe in order to affect a change of the 'Default Data Directory' path constant)
-Franz
-
How to deal with tempfiles named '+DATA' on physical standby databases
Hi gurus,
I have a 3-node RAC with 2 standby databases, all of them 11.2.0.3, on CentOS 5.10 64 bits.
I use Active Data Guard (with DB open in read-only mode, obviously).
My problem is that somehow I have 3 tempfiles named '+DATA' on one standby database, so I can't drop nor rename these files. Is there any way to fix this?
Looks like the tempfiles couldn't be created when the database was opened in read-only mode.
Thanks a lot for your help.
# Primary DB:
col TABLESPACE format a10
col TEMPFILE format a50
select a.name TABLESPACE, b.TS#, b.FILE#, b.name TEMPFILE, b.bytes/1024/1024 MB, b.status from v$tablespace a, v$tempfile b where a.name='TEMP';
TABLESPACE TS# FILE# TEMPFILE MB STATUS
TEMP 3 4 +DATA/racdb/tempfile/temp.382.873892319 8192 ONLINE
TEMP 3 7 +DATA/racdb/tempfile/temp.383.873892341 8192 ONLINE
TEMP 3 8 +DATA/racdb/tempfile/temp.384.873892341 8192 ONLINE
# Standby DB:
col TABLESPACE format a10
col TEMPFILE format a50
select a.name TABLESPACE, b.TS#, b.FILE#, b.name TEMPFILE, b.bytes/1024/1024 MB, b.status from v$tablespace a, v$tempfile b where a.name='TEMP';
TABLESPACE TS# FILE# TEMPFILE MB STATUS
TEMP 3 4 +DATA 0 ONLINE
TEMP 3 7 +DATA 0 ONLINE
TEMP 3 8 +DATA 0 ONLINE
TEMP 3 9 /u02/oradata/RACDBSB1/tempfile/temp_01.dbf 512 ONLINE
RMAN> report schema;
using target database control file instead of recovery catalog
RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name RACDBSB1
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
4 3072 TEMP 8192 +DATA
7 3584 TEMP 8192 +DATA
8 5120 TEMP 8192 +DATA
9 512 TEMP 8192 /u02/oradata/RACDBSB1/tempfile/temp_01.dbf
SQL> show parameter db_create_file_dest
NAME TYPE VALUE
db_create_file_dest string /u02/oradata
Thanks!

Hi all,
Regarding Hemant's question, YES, I use that config, and the error I receive is:
SQL> alter database tempfile 4 drop;
alter database tempfile 4 drop
ERROR at line 1:
ORA-01516: nonexistent log file, data file or temporary file "+DATA"
Segey, yes, I've tried with no luck.
The good news is that today we've resolved the problem; we had to use:
SQL> alter tablespace temp drop tempfile 4;
Tablespace altered.
OMG, what's the difference... alter database tempfile 4, alter tablespace temp drop tempfile 4... they are both the same...!! lol
Thanks guys! -
How to deal with deadlock on wwv_flow_data table when http server times out
There are some threads about a deadlock on the wwv_flow_data table. None of them contain a real explanation for this behaviour. In my case I will try to explain what I think is happening. Maybe it helps somebody who is hitting the same matter.
In my case with APEX 3.2.1 I am navigating from one page to another. Doing this, APEX will lock the table wwv_flow_data. As soon as the other page is shown, the lock is released. But now this other page contains a bad performing query (standard report region). After 5 minutes the http server (mod_plsql) will time out and present the message "No response from the application server" on the screen. Meanwhile the query is still running on the database server and the lock stays on the wwv_flow_data table.
Normal user behaviour is to use the back button to return to the previous page and try the navigation again, or
to refresh the page with the bad performing query.
And voila, now you have a deadlock on the wwv_flow_data table, since a second session is trying to do the same thing while the first hasn't finished yet.
How to deal with it?
First of all, have a good look at the bad performing query. Maybe you can improve it so that it finishes before the http server times out.
In my case the 11gR1 optimizer couldn't handle a subquery factoring clause in the best way. After changing it back to a classical inline query the problem was solved.
Secondly, you could increase the timeout parameter of the http server, although this is not the best way.
Maybe it would be better if a next version of APEX released the lock on the table wwv_flow_data earlier, or did a rollback just before the moment the http server times out.
regards,
Mathieu Meeuwissen

Hello Shmoove,
I saw your reply here and you probably understand the problems the HTTP 100 response may cause.
I am trying to send an image that was taken by getSnapshot. The problem is that the server responds with this HTTP 100 message.
I suspect that the reason my server doesn't recognize the file I'm sending from J2ME is that the "server to client" response with the 100 message only comes after the second chunk of the request (see what the TCP/IP viewer shows below):
POST /up01/up02.aspx HTTP/1.1
Content-Type: multipart/form-data; boundary=xxxxyyyyzzz
Connection: Keep-Alive
Content-length: 6294
User-Agent: UNTRUSTED/1.0
Host: szekely.dnsalias.com:80
Transfer-Encoding: chunked
400: Client to Server (126 bytes)
78
--xxxxyyyyzzz
Content-Disposition: form-data; name="pic"; filename="david.jpg"
Content-Type: application/octet-stream
400: Connected to Server
400: Server to Client (112 bytes)
HTTP/1.1 100 Continue
Server: Microsoft-IIS/5.1
Date: Wed, 23 Mar 2005 00:47:02 GMT
X-Powered-By: ASP.NET
Any help will be appreciated,
David -
How to deal with NULL when using OCI
I tried to select some data from a table, and some of them are
null.
Can anybody tell me how to deal with the null value?
Thanks!

See the documentation on 'indicator variables'. This is the
correct way of dealing with reading and/or writing NULL data
for a table. -
How to deal with credentials for external applications using a Java Client/
Hi Guys,
This is the case: I am integrating an external application with an ADF application. I have implemented some programmatic ViewObjects that are filled by a REST Java client wrapper. Everything is working fine, but the issue is that the credentials the wrapper uses are hard-coded inside the Java class. I am thinking of asking for the credentials at the beginning of my taskflow, storing them somewhere, and then using them to create my client wrapper (passing them in the constructor).
However, I don't know if my approach is good and I would like you to share your experiences or how to deal with this.
Regards

You can use the Credential Store Framework to store the credentials securely in the WebLogic server instead of hardcoding them in the Java class.
The Credential Store Framework:
- enables you to manage credentials securely
- provides an API for storage, retrieval, and maintenance of credentials in different back-end repositories
Check the documentation on CSF API -
http://docs.oracle.com/cd/E29505_01/core.1111/e10043/devcsf.htm
Major Steps -
1. Create a credential map and key in em console to store the password (http://docs.oracle.com/cd/E25054_01/core.1111/e10043/csfadmin.htm)
2. Use CSF API to retrieve the stored password
3. In jazn-data.xml give permissions to access CSF key and map -
How to deal with file(for example .xml)? what format of dir should be?
I'd like to operate on a file on disk, and want to use a relative directory.
How should the file directory be formatted?

Hi Kamlesh,
Thanks for your response.
Actually, in the "Process External Bank Statement" window, I see that there are a few entries for the previous year which have not been reconciled. I have never worked practically on BRS and hence I am scared to make any changes in the client's database without being confident about what I am doing. I need to reconcile one of their bank accounts for the month of April '08. I have the copy of the statements for the month ending 31st Mar 08 and 30th Apr 08. The closing balances are as below:
31/03/08 - 2300000.00
30/04/08 - 3100000.00
Now my OB for Bank a/c for April '08 in SAP is 2300000.00 Dr.
When i go to External Bank Reconciliation - Selection Criteria Screen (Manual Reconciliation), here are the detail that i enter:
Last Balance: INR -7,000,000.00000 (Grayed out by the system)
Ending Balance: INR -3,100,000.00000 (Entered by me)
End Date: 30/04/08 (Entered by me)
"Reconciliation Bank Statement" Screen opens up and shows the below balances in the screen:
Cleared Book Balance: INR -7,000,000.00000
Statement Ending Balance: INR -3,100,000.00000
Difference: INR 3,800,000.00000
As per the Bank statement, i have found all the transactions listed out here for the month of Apr '08 but, i also found that the open transactions for the previous month from April '08 have been lying in "Process External Bank Statement" window.
Could you please help me solve my issue as to what needs to be done or could you also get me some links from where i can get few documents for processing External Bank Reconciliations?
That will be of a great help for me. I need steps as to what needs to be done first and then the next so that i can arrive at the correct closing balance for the month April '08.
Thanks in Advance....
Regards,
Kaushal -
How to deal with special character in source file
Hi experts,
I am doing a file-to-file scenario in which my source file contains many special characters. When I put this file into MONI it goes through with the special characters. My source file is a fixed-length file, so in content conversion I have specified the field lengths, but due to these special characters the field length also varies. So please guide me on how to deal with these special characters in the sender adapter.
regards,
Saurabh

You could try using a Java mapping to change the encoding manually. For that, set the encoding of the OutputFormat of the XML you'll serialize. Try the following code piece for the mapping (inside a try/catch declaration):
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder documentBuilder = factory.newDocumentBuilder();
Document input = documentBuilder.parse(in);
OutputFormat format = new OutputFormat(input, "ISO-8859-1", false);
XMLSerializer serializer = new XMLSerializer(out, format);
serializer.serialize(input);
For more details check this guide:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42 -
How to deal with images stored in oracle
hi,
can anyone help me to solve this issue please:
in fact I am developing a Swing-based standalone application based on a TCP/IP client-server connection, so the point is to display on my frame, for each student, his information and also his personal picture.
Step 1: store the personal picture into the Oracle database from a specific frame that allows specifying each NEW student's profile and photo.
Step 2: as needed, a specific frame retrieves all the information related to a student and his photo (into a JLabel or other Swing component).
How to deal with this storing and then the retrieving from the Oracle DB?
Any help please!

If I understand your problem correctly, you need your client Java application to store and retrieve information from an Oracle DB.
This can be done via JDBC.
Here's the tutorial:
http://java.sun.com/developer/onlineTraining/Database/JDBC20Intro/JDBC20.html
Look at
http://java.sun.com/developer/onlineTraining/Database/JDBC20Intro/JDBC20.html#JDBC2018
for storing and retrieving binary data (like serialized Java objects (images, for example))
How to deal with 0...n or 1...n mappings?
Hi all,
I'm relatively new to BPM. I've already made several processes that are working fine. However, I'm now stuck because, for the life of me, I'm not able to understand how to deal with data mappings of nodes of a cardinality greater than 1...1.
I'm used to dealing with 0...n inputs of Web Services in Webdynpro Java and CAF Application services, using java code... but I just don't get how to deal with these in a BPM data mapping scenario.
Let's say you have a Web Service whose input is a node called "Employee", where you can add n Employee objects, like this:
Employee (0...n)
FirstName: String
LastName: String
How do you:
1- map a 0...n context node from a Web Dynpro or Web Service output, already containing several employees, into this Employee 0...n WS input node?
2- map a 0...n context node from a Web Dynpro or Web Service output, containing NO employees, into this Employee 0...n WS input node, without getting an error and the BPM crashing because it says the employee element is not found?
Hopefully someone can help me with this, because I'm about to go the way of calling the web service n times, one employee at a time, instead of one time with an Employee object with n registries in it.
Thanks!

Hi Abhijeet,
I think I should have mentioned this earlier. My Employee node is inside another node. So, the actual input structure of the WS is this:
InputValues (1...1)
Employees (0..n)
FirstName: String
LastName: String
This scenario works OK in BPM when mapping a 0..n node using deep copy as Jocelyn explained, but it still doesn't work if I want to pass an empty (not null) Employees array.
If I go to the WS Navigator and run this WS with the following parameters, it runs ok (I get an output message saying no employee was selected, which is how it should work):
InputValues - "Is null" checkbox not activated.
Employees - "Skip" checkbox not activated
FirstName - "Skip" checkbox activated
LastName - "Skip" checkbox activated
However, if instead of activating the checkbox of, say, "FirstName", I enter a "" value, I get an error from the WS saying that's not a valid first name, which is also how it should work.
In Java code, I would just pass an empty InputValues object to the WS, but I'm not sure how to do this in a BPM without it being considered null, and without having to set one of its String-child values to "".
Do you know how to achieve this? -
How to deal with generated programs in eCATT SAPGUI recording?
Hi experts and professionals,
I am trying to automate testing of our solutions by eCATTs and so far i have not been able to find solution for following problem.
Whole test scenario is very simple:
Check InfoProvider data (query, lookup, listcube,...)
Create DAP on InfoProvider
Archive InfoProvider
Check InfoProvider data (query, lookup, listcube,...) again
Compare results from step 1. and 4. (must match)
Reload archived data
Check InfoProvider data (query, lookup, listcube,...) again
Compare results from step 1. and 7. (must match)
As you can see, one of the required test steps is to check InfoProvider's data in transaction LISTCUBE.
But transaction LISTCUBE generates its program "name" every time it is executed and
I am struggling to find a way how to deal with these generated programs in eCATT SAPGUI recording.
Key is that solution must be generic and work for all SAP BW releases from 7.0 upwards
(having in mind that LISTCUBE can read NLS data from SAP BW 7.3 release).
Error description from eCATT log:
Screen Check Error: Expected Transaction: LISTCUBE, Actual Transaction: LISTCUBE.
Expected Program: GP0KOZE7EFIUBN10MZUFWX90W80, Actual Program: GPBP24INA6VV77SL0XKU5NA642O.
Expected Screen Number: 1000, Actual Screen Number: 1000.
There Is Probably an Error in SAPGUI recording.
ExceptionClass:CX_ECATT_APL_CAPTURE ExceptionId:SCREEN_CHECK_ERROR
RaisingClass:CL_APL_ECATT_LINE_INTERPRETER Include:CL_APL_ECATT_LINE_INTERPRETER=CM00J Line:443
Is there any way how to avoid program check in eCATT script?
Anything that would help me to find solution will be greatly appreciated.
Best Regards,
Igor

Dear Igor,
Your issue is caused by the "screen check" which eCATT performs here.
In general this screen check is a very useful activity, since it ensures that only those screens are processed by automation which were initially recorded. This should ensure, as much as possible, that only intended activities are invoked.
Remember that the driver of the screen flow is still the automated transaction program (not the test tool). So application logic decides which screen is sent next.
Using the screen check, the test tool tries to ensure that menu items, buttons and other activities are only automated when the tool "believes" it is working on the intended screen.
For generic test scripts and often in context of generated programs the screen check might hurt a bit.
To overcome this, one might try to make the check dynamic (as Sheetal correctly suggests).
If the name of the program cannot be determined for any reason, one can use another method and do the following:
- Change the value of ProcessedScreen-Active to 'R'
This will disable/skip the screen-check for this ProcessedScreen.
Sure, the solution includes a certain risk, since not checking that the correct screen appears might lead to automation of actions with undesired impact.
Maybe this can improve your solution.
Kind Regards
Jens -
How to deal with this problem?
How to deal with this problem?
We plan to use Oracle Coherence (In-Memory Data Grid) for a large-scale application. In order to keep the database table data in Coherence caches, we will create all the corresponding Java objects (entities) and construct the persistence system using JPA/EclipseLink+JDBC. In this way, any in-memory object update will be persisted to the corresponding database tables.
The problem is that some existing application code updates these database tables directly now. If direct database table updates are not permitted in the persistence environment, we have to discard most of the existing application scripts.
I want to know, in this situation, should I discard most of the existing scripts?
Are there any other solutions?

Allowing writes from both cache & DB is possible, with its own set of issues.
The main issue to consider is conflicts from updates on same record via both cache and DB. If your caches are write-through the conflict decreases - but then cache writes become slower. If your caches are write-behind potentially the older cache update will overwrite the latest DB update. Now you are back to Database 101 -- timestamps, versions, etc...
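The "timestamps, versions" point can be sketched with optimistic versioning: every writer (cache store or direct DB client) must present the version it read, and a stale writer is rejected instead of silently clobbering a newer update. A minimal illustration, with a hypothetical record shape:

```c
/* Hypothetical record carrying an optimistic version counter. */
struct record {
    int  value;
    long version;
};

/*
 * Apply an update only if the caller read the current version;
 * otherwise report a conflict so the caller can re-read and retry,
 * rather than letting an older write overwrite a newer one.
 */
int
versioned_update(struct record *r, long expected_version, int new_value)
{
    if (r->version != expected_version)
        return (-1);            /* stale: someone else updated first */
    r->value = new_value;
    r->version++;
    return (0);
}
```

In the cache/DB scenario the version column would live in the table, and both the cache store and the legacy direct-update code would have to honour it.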
If you use a DB trigger to initiate the resync request you might want to distinguish whether the update has come from the cache-store (in which case, you may choose to do nothing), or if the update was from the 'existing apps', etc...
If you choose to inject the resync logic at the application code level - you have the usual sourcecode issues - can you modify the code, is all the DB code localized, what options do you have to link with Coherence functionality (DLL, external proc, webservice, etc), etc... Naturally though, if you have to make substantial changes to signal a resync....you might consider taking the extra step and change the code to write to the cache. -
How to deal with java integrity??
Hi everyone,
it's just a few months since I started to use Java, and I've been really confused about how to use it. As I was using C++ before, it's really easy for me to handle a few libraries and keywords and write everything on my own. But in Java, there are TOOOO many libraries and keywords, all with different procedures and different cases, which really bothers me, and I can't understand how to deal with all of it. As an example, when I just want to start and write a program, I start searching the net and wow: too many different classes and different keywords, so I get too confused and I prefer not to continue. Would someone please help me and tell me how to find a solution for this essential problem???

asker wrote:
I've been using C++ before
But in Java, TOOOO many keywords

Really? C++ has 63 keywords:
asm, auto, bool, break, case, catch, char, class, const, const_cast, continue, default, delete, do, double, dynamic_cast, else, enum, explicit, export, extern, false, float, for, friend, goto, if, inline, int, long, mutable, namespace, new, operator, private, protected, public, register, reinterpret_cast, return, short, signed, sizeof, static, static_cast, struct, switch, template, this, throw, true, try, typedef, typeid, typename, union, unsigned, using, virtual, void, volatile, wchar_t, while

while Java has 50, 2 of which (goto and const) are reserved but not used:

abstract, assert, boolean, break, byte, case, catch, char, class, const, continue, default, do, double, else, enum, extends, final, finally, float, for, goto, if, implements, import, instanceof, int, interface, long, native, new, package, private, protected, public, return, short, static, strictfp, super, switch, synchronized, this, throw, throws, transient, try, void, volatile, while
How you deal with the situation if the vendor/supplier is also a customer ?
Dear All,
Could you please help me with the idea on how to deal with the suppliers/vendors who also are customers to you in MM?
Thanks in advance,
Ranjan

It depends what your intentions are.
However:
1. Create a customer master record for that vendor that is also a customer.
2. Enter Vendor number in Vendor field in control data-customer master record.
3. Enter customer number in Customer field in Control data- vendor master record
Finally, Check "Clrg with vend." field or "Clrg with cust." field