Define hashed table using database table
Hi,
I have a database table and want to define a hashed table using this table's structure. How would I do that?
Thanks
RT
Hi Rob,
The syntax is as follows,
DATA ITAB TYPE HASHED TABLE OF SPFLI
WITH UNIQUE KEY CARRID CONNID.
The table object ITAB has the type hashed table, a line type corresponding to the flat structure SPFLI from the ABAP Dictionary, and a unique key with the key fields CARRID and CONNID. The internal table ITAB can be regarded as an internal template for the database table SPFLI. It is therefore particularly suitable for working with data from this database table as long as you only access it using the key.
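For example, once the table is filled, a single flight can be read with the full key in constant time. A minimal sketch (the key values 'LH' and '0400' are just placeholders, not from the original post):

```abap
* Fill the hashed table from the database, then read one row by its full key.
DATA itab TYPE HASHED TABLE OF spfli
     WITH UNIQUE KEY carrid connid.
DATA wa TYPE spfli.

SELECT * FROM spfli INTO TABLE itab.

* Constant-time access: the full unique key must be supplied.
READ TABLE itab INTO wa
     WITH TABLE KEY carrid = 'LH' connid = '0400'.
IF sy-subrc = 0.
  WRITE: / wa-cityfrom, wa-cityto.
ENDIF.
```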
Similar Messages
-
Getting runtime error while using hash table
Hi,
I have defined an internal table as hashed with a unique key, but while executing the program it gives a dump saying "There is already a line with the same key." My code is:
data: begin of wa_rkrp,
vbeln like vbrk-vbeln,
fkdat like vbrk-fkdat,
fkart like vbrk-fkart,
kunag like vbrk-kunag,
knumv like vbrk-knumv,
inco1 like vbrk-inco1,
spart like vbrk-spart,
netwr like vbrk-netwr,
mwsbk like vbrk-mwsbk,
uepos like vbrp-uepos,
werks like vbrp-werks,
lgort like vbrp-lgort,
end of wa_rkrp.
data lt_rkrp like hashed table of wa_rkrp
with unique key vbeln
with header line.
select vbrk~vbeln
vbrk~fkdat
vbrk~fkart
vbrk~kunag
vbrk~knumv
vbrk~inco1
vbrk~spart
vbrk~netwr
vbrk~mwsbk
vbrp~uepos
vbrp~werks
vbrp~lgort
into table lt_rkrp
from vbrk inner join vbrp
on vbrp~vbeln = vbrk~vbeln
where vbrk~fkdat in s_fkdat
and vbrk~bukrs eq p_bukrs.
Any problem in my select query, or with my table definition?
Can anyone please suggest how to rectify this?
Define a unique key with VBELN and POSNR:
data lt_rkrp like hashed table of wa_rkrp
with unique key vbeln posnr
with header line.
BTW: Stop using the header line! It is outdated!
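A hedged sketch of what the corrected declaration could look like (only a few fields shown; the remaining fields stay as in the original post):

```abap
* POSNR added to the work area and to the unique key; header line dropped.
* Remember to add vbrp~posnr to the SELECT field list as well, so every
* joined row gets a distinct key.
data: begin of wa_rkrp,
        vbeln like vbrk-vbeln,
        posnr like vbrp-posnr,
        fkdat like vbrk-fkdat,
        " ... remaining fields as before ...
      end of wa_rkrp.
data lt_rkrp like hashed table of wa_rkrp
     with unique key vbeln posnr.
```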
Edited by: Micky Oestreich on Mar 23, 2009 7:28 AM -
How to define a hashed table?
Hi..
I want to know that how we can define a hash table in ABAB.
And what are the advantages of that table?
Thanks
Once you have data in your internal table, there is not much of a performance issue... unless of course it contains a huge number of entries.
I am not aware of a way for an internal table to behave as both sorted and hashed.
If you go for a hashed table, the response time for a key lookup is constant, regardless of the number of table entries, because the search uses a hash algorithm. You must specify a UNIQUE key for hashed tables.
Just go through this link for some more information:
http://www.sap-img.com/abap/what-are-different-types-of-internal-tables-and-their-usage.htm
read this...
Standard tables are managed system-internally by a logical index. New rows are either attached to the table or added at certain positions. The table key or the index identify individual rows.
Sorted tables are managed by a logical index (like standard tables). The entries are listed in ascending order according to table key.
Hashed tables are managed by a hash algorithm. There is no logical index. The entries are not ordered in the memory. The position of a row is calculated by specifying a key using a hash function.
Sorted tables store records in sorted order at all times. It is faster to search through a sorted table than through a standard table, but performance still depends on the number of records in the internal table.
A hashed table's read performance is NOT dependent on the number of records. However, it is intended for reads that return one and only one record. It uses a "side table" with a hash algorithm to store the physical location of the record in the actual internal table. It is not necessarily sorted/organized in a meaningful order (as a sorted table is). Please note that changes to a hashed table's records must be managed carefully. Review SAP's online help in SE38/SE80 about managing hashed tables.
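To make the differences concrete: the three table kinds are declared almost identically, only the table kind and key differ. A minimal sketch with illustrative field names (not from the original post):

```abap
TYPES: BEGIN OF ty_row,
         id    TYPE i,
         value TYPE string,
       END OF ty_row.

* Standard: index access, linear key search, key need not be unique.
DATA lt_std  TYPE STANDARD TABLE OF ty_row WITH NON-UNIQUE KEY id.
* Sorted: index access, binary key search, entries kept in key order.
DATA lt_sort TYPE SORTED TABLE OF ty_row WITH UNIQUE KEY id.
* Hashed: no index, constant-time key access, key must be unique.
DATA lt_hash TYPE HASHED TABLE OF ty_row WITH UNIQUE KEY id.
```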
TYPES: BEGIN OF TY_ITAB,
FIELD1 TYPE I,
FIELD2 TYPE I,
END OF TY_ITAB.
TYPES ITAB TYPE SORTED TABLE OF TY_ITAB WITH UNIQUE KEY FIELD1.
For the proper syntax, see the F1 help. -
Add rows to a Table object without using database .
Hi everyone ,
I'm using the Visual Web JavaServer Faces framework in NetBeans IDE 6.5. I want to use a Table and bind data to it row by row, without using a database. I don't know how to do it; I searched and found nothing except a data provider, which as I understood it only binds database data to a table.
I have a string in a loop that I want to divide into several substrings; these substrings will be the fields of each row, and at the end of the loop the number of rows equals the number of loop iterations.
I'll appreciate any help in this matter
Best Regard,
Fatemeh
My Dataout variable contains a long string of personal information: name, family name, father's name, date of birth, national code, and nationality.
I want to make a personal-information table from the above, where name, family name, father's name, date of birth, national code, and nationality are my column values. The number of rows depends on how many times the loop is repeated.
I can make a JTable by defining an array[6] as above, but this table doesn't appear in the Visual Web JavaServer Faces framework. I want to do in the Table component of the visual designer what JTable does. Can anybody help me?
Thanks in advance -
How about using a partial key to loop at a hashed table?
For example, I want to loop over an internal table of BSID according to BKPF.
data itab_bsid type hashed table of BSID with unique key bukrs belnr gjahr buzid.
Loop at itab_bsid where bukrs = wa_bkpf-bukrs
and belnr = wa_bkpf-belnr
and gjahr = wa_bkpf-gjahr.
endloop.
I know that if you use the full key to access this hashed table it is certainly quick. My question is: when I use a partial key of this internal hashed table to loop over it, what is the performance like?
Another question: in this case (BSID has many, many records), which performs better, a sorted table or a hashed table?
You can't cast between a data reference, which l_tax is, and an object reference, which l_o_tax_code is.
osref is a generic object type and you store a reference to some object in it, right? So the question is: what kind of object do you store there? Please note: this must be an object reference, not a data reference.
i.e
"here goes some class
class zcl_spfli definition.
endclass.
class zcl_spfli implementation.
endclass.
"here is an OBJECT REFERENCE (so I refer to a class), i.e. a persistent object for table SPFLI
data oref_spfli type ref to zcl_spfli.
"but here I have a DATA REFERENCE (so I refer to some data object), i.e. the DDIC structure SPFLI
data dref_spfli type ref to spfli.
So my osref can hold only oref_spfli; it is not intended for dref_spfli. That's why you get this syntax error. Once you have stored a reference to zcl_spfli in osref, you will be able to dereference it and access the object's attributes.
data: osref type osref.
create object oref_spfli.
osref = oref_spfli.
"now osref holds a reference to the object; you can dereference it
oref_spfli ?= osref.
oref_spfli->some_attribute = ....
OSREFTAB is just a table whose line is of type OSREF (so can hold multiple object references - one in each line).
Regards
Marcin -
Hello,
I am trying to just get the Managers of my users in Active Directory. I have gotten it down to the user and their manager, but I don't need the user. Here is my code so far:
Get-ADUser -Filter * -SearchBase "OU=REDACTED,OU=Enterprise Users,DC=REDACTED,DC=REDACTED" -Properties Manager |
Select Name, @{N='Manager';E={(Get-ADUser $_.Manager).Name}} |
Export-Csv c:\managers.csv -Append
Also, I need to get rid of the duplicate values in my hash table. I tried playing around with -sort unique, but couldn't find a place it would work. Any help would be awesome.
Thanks,
Matt
I would caution that, although it is not likely, managers can also be contact, group, or computer objects. If this is possible in your situation, use Get-ADObject in place of Get-ADUser inside the curly braces.
Also, if you only want users that have a manager assigned, you can use -LDAPFilter "(manager=*)" in the first Get-ADUser.
Finally, if you want all users that have been assigned the manager for at least one user, you can use:
Get-ADUser -LDAPFilter "(directReports=*)" |
Select @{N='Manager';E={(Get-ADUser $_.sAMAccountName).Name}} -Unique |
Sort Manager |
Export-Csv .\managerList.csv -NoTypeInformation
This works because when you assign the manager attribute of a user, the user is added to the directReports attribute of the manager. The directReports attribute is multi-valued (an array, in essence).
Again, if managers can be groups or some other class of object (not likely), then use Get-ADObject throughout and identify by distinguishedName instead of sAMAccountName (since contacts don't have a sAMAccountName).
Richard Mueller - MVP Directory Services -
Question on the use of hash tables
I have created an extract program that extracts data from the BKPF and BSEG tables. I am extracting a lot of data which is needed for the auditors, and depending on the selection criteria (company code and date range), this extract takes quite a while to run. I heard from another developer working on a different project that hash tables are used when dealing with a lot of data. I am not that familiar with hash tables and was wondering whether the hash-table approach would help with the processing time of my program.
thanks in advance for the help
This is only part of the code, but it is the part where the selects and the writing of the file happen. Let me know if I have to post the entire program.
FORM f_get_data .
SELECT * INTO TABLE wt_bkpf
FROM bkpf
WHERE bukrs IN s_bukrs
AND belnr IN s_belnr
AND blart IN s_blart
AND bldat IN s_bldat
AND budat IN s_budat
AND bstat IN s_bstat.
IF sy-dbcnt IS INITIAL.
MESSAGE i208(00) WITH text-001.
STOP.
ENDIF.
SORT wt_bkpf BY bukrs belnr gjahr.
SELECT mandt bukrs belnr buzei buzid bschl koart shkzg dmbtr
wrbtr pswbt sgtxt kostl saknr hkont dmbe2
INTO TABLE wt_bseg
FROM bseg
FOR ALL ENTRIES IN wt_bkpf
WHERE bukrs EQ wt_bkpf-bukrs
AND belnr EQ wt_bkpf-belnr.
ENDFORM. " f_get_data
FORM f_split_data .
DATA wlv_index LIKE sy-tabix.
DESCRIBE TABLE wt_bkpf LINES wv_index.
wlv_index = 0.
wv_item_index = 1.
WHILE wlv_index LT wv_index.
ADD 1 TO wlv_index.
CLEAR wt_bkpf.
READ TABLE wt_bkpf INDEX wlv_index.
IF NOT sy-subrc IS INITIAL. EXIT. ENDIF.
LOOP AT wt_bseg FROM wv_item_index
WHERE bukrs EQ wt_bkpf-bukrs
AND belnr EQ wt_bkpf-belnr.
wv_item_index = sy-tabix + 1.
move wt_bkpf-bukrs to ws_bseg_hold-bukrs.
move wt_bkpf-belnr to ws_bseg_hold-belnr.
move wt_bkpf-gjahr to ws_bseg_hold-gjahr.
move wt_bkpf-blart to ws_bseg_hold-blart.
move wt_bkpf-bldat to ws_bseg_hold-bldat.
move wt_bkpf-budat to ws_bseg_hold-budat.
move wt_bkpf-monat to ws_bseg_hold-monat.
move wt_bkpf-cpudt to ws_bseg_hold-cpudt.
move wt_bkpf-cputm to ws_bseg_hold-cputm.
move wt_bkpf-usnam to ws_bseg_hold-usnam.
move wt_bkpf-tcode to ws_bseg_hold-tcode.
move wt_bkpf-xblnr to ws_bseg_hold-xblnr.
move wt_bkpf-bktxt to ws_bseg_hold-bktxt.
move wt_bkpf-waers to ws_bseg_hold-waers.
move wt_bkpf-bstat to ws_bseg_hold-bstat.
move wt_bkpf-ausbk to ws_bseg_hold-ausbk.
move wt_bseg-mandt to ws_bseg_hold-mandt.
move wt_bseg-buzei to ws_bseg_hold-buzei.
move wt_bseg-buzid to ws_bseg_hold-buzid.
move wt_bseg-bschl to ws_bseg_hold-bschl.
move wt_bseg-koart to ws_bseg_hold-koart.
move wt_bseg-shkzg to ws_bseg_hold-shkzg.
move wt_bseg-dmbtr to ws_bseg_hold-dmbtr.
move wt_bseg-wrbtr to ws_bseg_hold-wrbtr.
move wt_bseg-pswbt to ws_bseg_hold-pswbt.
move wt_bseg-sgtxt to ws_bseg_hold-sgtxt.
move wt_bseg-kostl to ws_bseg_hold-kostl.
move wt_bseg-saknr to ws_bseg_hold-saknr.
move wt_bseg-hkont to ws_bseg_hold-hkont.
move wt_bseg-dmbe2 to ws_bseg_hold-dmbe2.
APPEND ws_bseg_hold TO wt_bseg_output.
ENDLOOP.
ENDWHILE.
ENDFORM. " f_split_data -
Error occured while provisioning using Database Application Table connector
Hi,
I am trying to provision using Database Application Table connector and OIM 10g. Provisioning is successful but the child data is not sitting in the database. Getting the below error
Response: GCPROV.ProvTransportProvider.DBProvisioningTransport.The column key_id does not exist in the target
Response Description: Unknown response received
Notes:
Assigned to Group : SYSTEM ADMINISTRATORS
Error Details
Setting task status... "GCPROV.ProvTransportProvider.DBProvisioningTransport.The column key_id does not exist in the target" does not correspond to a known Response Code. Using "UNKNOWN".
Please help
Thanks in advance
Sahana
Make sure that your DataSource is in a running state when you create the connection pool in the DBAdapter. You may restart the server or recreate the connection pool in the DBAdapter.
Regards,
Anuj -
How to use database look up table function in xsl mapping
Can anybody tell me how to use the database lookup table function when mapping XSL between two nodes?
I have an XML file coming in, and depending on one of the XML elements we need to decide which path to take. Using this XML element, we need to query a database table, get metadata, and take the appropriate path accordingly. I have written a lookup function which returns the metadata value.
Now, the issue is: how do I pass the XML element value as input to the lookup function? When I tried to drag it to the input node of the lookup function, it throws an error like "Maximum number of parameters exceeded".
Thanks,
If the lookup table is always going to remain the same (e.g. a character generator or something similar), you can place the values in a 2D array constant on your diagram, with the input value as one column and the equivalent as the other. When you need to perform the lookup, you use Index Array to return all the values in the input column, search it using Search 1D Array, and use the resulting index number to index the other column's data. If the values may change, it would probably be best to load an array control with your equivalent values from a file.
P.M.
Putnam
Certified LabVIEW Developer
Senior Test Engineer
Currently using LV 6.1-LabVIEW 2012, RT8.5
LabVIEW Champion -
How Hash tables can be used in PI mapping
Hi Experts,
I don't have any idea how we store values in hash tables or how to implement them in mapping.
In my scenario I have two fields, matnum and quantity. If matnum is not null, then we have to check whether matnum exists in the hash table, and also check whether the hash table is empty or not.
How we can do this in graphical message mapping?
how to store the variable matnum in a table?
If global variables are used, how do we implement them in mapping? How do we call the keys from the hash table?
Divya,
We have a similar requirement for getting different values. Below, param1 may be your matnum and param2 your quantity.
What you need to do is first declare a global variable (A), fill the hash table as below (B), and retrieve (C) based on index. You can tweak the code based on your requirement.
(A) Declare a global variable (last icon in the message mapping toolbar)
String globalString[] = new String[10];
(B) Fill Hash Table
import java.util.Hashtable;
public void saveparam1(String[] param1, String[] param2, ResultList result, Container container) {
    Hashtable htparam1 = new Hashtable();
    int indx = 0;
    // collect the distinct param1 values first
    for (int i = 0; i < param1.length; i++) {
        String strparam1 = param1[i].trim();
        if (strparam1.length() > 0 && htparam1.get(strparam1) == null && indx < globalString.length) {
            globalString[indx++] = strparam1;
            htparam1.put(strparam1, strparam1);
        }
    }
    // then the distinct param2 values
    for (int i = 0; i < param2.length; i++) {
        String strparam2 = param2[i].trim();
        if (strparam2.length() > 0 && htparam1.get(strparam2) == null && indx < globalString.length) {
            globalString[indx++] = strparam2;
            htparam1.put(strparam2, strparam2);
        }
    }
    result.addValue(globalString[0]); // for the first value
}
(C) for subsequent reading/accessing
//pass constant whatever number is required to this function
String retValue = "";
int indx = Integer.parseInt(index);
indx = indx - 1;
if ((indx >= 0) && (indx < globalString.length)) {
    retValue = globalString[indx];
}
return retValue;
Hope this helps! -
Header, Line Item and Cache Techniques Using Hashed Tables
Hi,
How can I work with header, line item, and a cache techniques using hashed tables?
Thanks,
Shah.
Hi,
Here is an example to clarify the ideas:
In general, every time you have a header-> lines structure you have a unique key for the lines that has at least header key plus one or more fields. I'll make use of this fact.
I'll try to put an example of how to work with header -> line items and a cache technique using hashed tables.
Just suppose that you need a list of all the material movements '101'-'901' for a certain range of dates in mkpf-budat. We'll extract these fields:
mkpf-budat
mkpf-mblnr,
mseg-lifnr,
lfa1-name1,
mkpf-xblnr,
mseg-zeile
mseg-charg,
mseg-matnr,
makt-maktx,
mseg-erfmg,
mseg-erfme.
I'll use two caches: one for maintaining lfa1-related data and the other for makt-related data. Also, I'll only describe the data-gathering part; the display of the data is left to your own imagination.
The main ideas are:
1. As this is an example I won't use an inner join. If properly designed, a join may be faster.
2. I'll use four hashed tables: ht_mkpf, ht_mseg, ht_lfa1 and ht_makt to get data into memory. Then I'll collect all the data I want to list into a fifth table ht_lst.
3. ht_mkpf should have (at least) mkpf's primary key fields : mjahr, mblnr.
4. ht_mseg should have (at least) mseg primary key fields: mjahr mblnr and zeile.
5. ht_lfa1 should have an unique key by lifnr.
6. ht_makt should have an unique key by matnr.
7. I prefer using a header line because it makes the code easier to follow and understand. The waste of time isn't significant (in my experience, at least).
Note: When I've needed to work from header to item lines then I added a counter in ht_header that maintains the count of item lines, and I added an id in the ht_lines so I can read straight by key a given item line. But this is very tricky to implement and to follow. (Nevertheless I've programmed it and it works well.)
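The counter/id trick described in the note could be sketched like this (field and table names are purely illustrative, not part of the example program that follows):

```abap
* Header row keeps a count of its item lines; each item line carries a
* sequential id, so a given item can be read straight by header key + id.
TYPES: BEGIN OF ty_head,
         mblnr TYPE mkpf-mblnr,
         mjahr TYPE mkpf-mjahr,
         lines TYPE i,                " number of item lines
       END OF ty_head,
       BEGIN OF ty_item,
         mblnr TYPE mseg-mblnr,
         mjahr TYPE mseg-mjahr,
         id    TYPE i,                " position within the header
       END OF ty_item.

DATA: ht_head TYPE HASHED TABLE OF ty_head WITH UNIQUE KEY mblnr mjahr,
      ht_item TYPE HASHED TABLE OF ty_item WITH UNIQUE KEY mblnr mjahr id.
```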
The data will be read in this sequence:
select data from mkpf into table ht_mkpf
select data from mseg into table ht_mseg, taking into account all the data in ht_mkpf
loop at ht_mseg (lines)
filter unwanted records
read cache for lfa1 and makt
fill in ht_lst and collect data
endloop.
* tables
tables: mkpf, mseg, lfa1, makt.
* internal tables
data: begin of wa_mkpf, "header
mblnr like mkpf-mblnr,
mjahr like mkpf-mjahr,
budat like mkpf-budat,
xblnr like mkpf-xblnr,
end of wa_mkpf.
data ht_mkpf like hashed table of wa_mkpf
with unique key mblnr mjahr
with header line.
data: begin of wa_mseg, " line items
mblnr like mseg-mblnr,
mjahr like mseg-mjahr,
zeile like mseg-zeile,
bwart like mseg-bwart,
charg like mseg-charg,
matnr like mseg-matnr,
lifnr like mseg-lifnr,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
end of wa_mseg.
data ht_mseg like hashed table of wa_mseg
with unique key mblnr mjahr zeile
with header line.
data: begin of wa_lfa1,
lifnr like lfa1-lifnr,
name1 like lfa1-name1,
end of wa_lfa1.
data ht_lfa1 like hashed table of wa_lfa1
with unique key lifnr
with header line.
data: begin of wa_makt,
matnr like makt-matnr,
maktx like makt-maktx,
end of wa_makt.
data: ht_makt like hashed table of wa_makt
with unique key matnr
with header line.
* result table
data: begin of wa_lst, "
budat like mkpf-budat,
mblnr like mseg-mblnr,
lifnr like mseg-lifnr,
name1 like lfa1-name1,
xblnr like mkpf-xblnr,
zeile like mseg-zeile,
charg like mseg-charg,
matnr like mseg-matnr,
maktx like makt-maktx,
erfmg like mseg-erfmg,
erfme like mseg-erfme,
mjahr like mseg-mjahr,
end of wa_lst.
data: ht_lst like hashed table of wa_lst
with unique key mblnr mjahr zeile
with header line.
data: g_lines type i.
select-options: so_budat for mkpf-budat default sy-datum.
select-options: so_matnr for mseg-matnr.
form get_data.
select mblnr mjahr budat xblnr
into table ht_mkpf
from mkpf
where budat in so_budat.
describe table ht_mkpf lines g_lines.
if g_lines > 0.
select mblnr mjahr zeile bwart charg
matnr lifnr erfmg erfme
into table ht_mseg
from mseg
for all entries in ht_mkpf
where mblnr = ht_mkpf-mblnr
and mjahr = ht_mkpf-mjahr.
endif.
loop at ht_mseg.
* filter unwanted data
check ht_mseg-bwart = '101' or ht_mseg-bwart = '901'.
check ht_mseg-matnr in so_matnr.
* read the header line
read table ht_mkpf with table key mblnr = ht_mseg-mblnr
mjahr = ht_mseg-mjahr.
clear ht_lst.
* note: this may be faster if you move field by field
move-corresponding ht_mkpf to ht_lst.
move-corresponding ht_mseg to ht_lst.
perform read_lfa1 using ht_mseg-lifnr changing ht_lst-name1.
perform read_makt using ht_mseg-matnr changing ht_lst-maktx.
insert table ht_lst.
endloop.
* implementation of cache for lfa1
form read_lfa1 using p_lifnr changing p_name1.
read table ht_lfa1 with table key lifnr = p_lifnr
transporting name1.
if sy-subrc <> 0.
clear ht_lfa1.
ht_lfa1-lifnr = p_lifnr.
select single name1
into ht_lfa1-name1
from lfa1
where lifnr = p_lifnr.
if sy-subrc <> 0. ht_lfa1-name1 = 'n/a in lfa1'. endif.
insert table ht_lfa1.
endif.
p_name1 = ht_lfa1-name1.
endform.
* implementation of cache for makt
form read_makt using p_matnr changing p_maktx.
read table ht_makt with table key matnr = p_matnr
transporting maktx.
if sy-subrc <> 0.
ht_makt-matnr = p_matnr.
select single maktx into ht_makt-maktx
from makt
where spras = sy-langu
and matnr = p_matnr.
if sy-subrc <> 0. ht_makt-maktx = 'n/a in makt'. endif.
insert table ht_makt.
endif.
p_maktx = ht_makt-maktx.
endform.
Reward points if found helpful...
Cheers,
Siva. -
Thread pool and use database table as queue
Is it possible to use a database table as a queue rather than an in-memory "LinkedBlockingQueue"? If yes, how do I serialize the task object into a table, and how do I retrieve the task object back when it needs to be executed by the ExecutorService?
cometta wrote:
Is it possible to use a database table as a queue rather than an in-memory "LinkedBlockingQueue"?
From the sound of this I think BDLH is on the right track: I would think in terms of messaging with JMS. There is one potential pitfall you may have to work around. JMS does not guarantee the order in which messages will be extracted from the queue; if the order of messages is important, that is a problem you'll have to solve.
PS. -
Validation by using Database Table Values
Dear Experts,
Please suggest us how can we validate the user input values with our database table values in ADF.
thanks in advance
Shiv Naresh
Take a look at Edwin Biemond's blog post:
Using database tables as authentication provider in WebLogic
http://biemond.blogspot.de/2008/12/using-database-tables-as-authentication.html -
Syntax error not defined as table or projection view or database view
All,
I have to select from the maintenance view V_TCURR. When I do a SELECT statement I get a syntax error stating that it is not defined as a table, projection view, or database view in the ABAP Dictionary.
Can you give me a hint: how do I select from a maintenance view in a SELECT statement?
Any help will be highly appreciated.
Thanks
data: lt_TCURR TYPE TABLE OF TCURR.
data: lv_TCURR TYPE TCURR.
SELECT * INTO TABLE lt_tcurr
FROM TCURR.
TCURR just has some extra fields that are excluded in the maintenance view.
Hello,
I am debating whether or not to use hash tables for something I need to do. Here is the scenario:
I was given a list of data, this data contains a string of indexes.
Throughout my program, I took that list of data and sorted it. Now, after I have done the calculations I needed, I need to re-output the data from the original file in the same order, using some new information that I have retrieved.
Basically, here is my question: should I iterate through the original file, searching for the index per line, then do a manual search through my manipulated sorted list, which contains the information I want?
OR
Should I learn to use hashing, hash the indexes in the list, hash the sorted list, and find matches? To be honest, I'm not too sure how hashing works or how it can benefit me.
Don't worry about efficiency now. Worry about correctness. You're far more likely to make your program unusably incorrect by chasing efficiency than to make it unusably inefficient by chasing correctness.
Anyway, I don't see how hashing has any relevance to this issue.
You could just create a list of courses when you read in the input, and then make another list for purposes of sorting. Then when you produce output, use the original list. Actually, I'm not convinced that you even need to make that second sorted list (the efficiency gains are probably minuscule, or possibly even negative), but whatever. Since the objects in both lists are the same, changes you make to the objects in the second list are present in the objects in the first list.