Compressing Data Passed Through WebService
Hi there...
Before I start explaining the problem: I am not an expert in web services or WebLogic.
1- I have a web service that accepts a lot of textual information and responds with a lot of textual information as well. Is there an option in the WebLogic settings that enables data compression automatically, or should I implement data compression on the client and server?
2- It also seems that the parameters passed through the web service carry a lot of XML overhead. Is there a way to reduce the amount of overhead information passed?
Notice that SSL is being used.
3- Finally, what are possible causes of slow responses from the server? I am getting an 8 to 10 second average response time. I don't think it is the WebLogic server itself, simply because the development environment uses the local LAN and there the response is much faster. Any ideas?
thanks
1- I have a web service that accepts a lot of textual information and responds with a lot of textual information as well. Is there an option in the WebLogic settings that enables data compression automatically, or should I implement data compression on the client and server?
Not that I know of; I think you have to resort to zipping the messages yourself.
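If you do end up zipping the messages yourself, the JDK's java.util.zip is enough; a minimal sketch (the class and method names are illustrative, not any WebLogic API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class MessageZip {

    // Compress a large text payload before sending it over the wire.
    public static byte[] compress(String text) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    // Decompress on the receiving side.
    public static String decompress(byte[] data) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
            return new String(bos.toByteArray(), StandardCharsets.UTF_8);
        }
    }
}
```

Repetitive XML compresses very well, so for large textual payloads this typically shrinks the transfer considerably at the cost of CPU on both ends; client and server just have to agree on the scheme.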
2- It also seems that the parameters passed through the web service carry a lot of XML overhead. Is there a way to reduce the amount of overhead information passed?
One way to reduce your XML overhead is to define smaller messages in your WSDL (and XSD).
3- Finally, what are possible causes of slow responses from the server? I am getting an 8 to 10 second average response time. I don't think it is the WebLogic server itself, simply because the development environment uses the local LAN and there the response is much faster. Any ideas?
Network overhead. As you already mentioned in the other two questions, you are sending large messages. Maybe your system administrator has a monitoring tool for the network that can give you some insight into the matter.
Information concerning WebLogic and Web Services can be found here: http://download.oracle.com/docs/cd/E21764_01/web.1111/e14529/web_services.htm
Similar Messages
-
How can I see how much data passes through my Time Capsule?
I am thinking of using a cellular data plan at home. My current, rural internet provider is slow and unreliable. I use a MiFi as a backup and have 4G service which is much faster and rarely goes down. I need to see how much data is downloaded and uploaded to compare costs. All our data passes through my Time Capsule.
A similar app to what Bob mentions is peakhour.. it works on any of the newer OS.
https://itunes.apple.com/au/app/peakhour/id468946727?mt=12
It is a good app. BUT.. fat ugly BUT.. just the same as Bob has explained, it depends on SNMP to work, and since Apple removed that very useful and functional protocol from its AirPort range, you can no longer use it. Bizarre.
I strongly recommend a Netgear WNDR3800 (an older model now, but you can pick one up cheaply on eBay) and a 3rd-party firmware called Gargoyle. Apple deletes my posts if I point you to it, so you will have to search for it yourself.
Replace your tall TC with the Netgear as the main router, bridge the TC to it, and you can continue to use its wireless and TM backups. The advantage is that Gargoyle will not only measure everyone's usage by IP, it can also set a quota for everyone using the net, and you can set that quota hourly, daily, weekly or monthly. It will track the usage and you can see at a glance what everyone has used.
It is simple to load, just like a standard firmware update. The interface is as clear as anyone can make it with such a lot of tools. And the actual router is powerful enough to provide excellent QoS and parental controls on top of measurement and quotas. -
Bex Query: make data pass through user exit calculation at navigation time
Hi all!
I have a new requirement and I don't know how to solve it...
Currently, when I execute a web model containing a query, the system "reads" a date and calculates the query based on that date in a user exit defined in CMOD, for example filtering data with an interval between January and the date read.
Besides, I have in the web model a dropdown item where the user can choose other months. The dropdown item only shows single values, so if I choose a month, the query only shows data for that month.
I need the system to filter the query with the new interval, for example between January and the month the user has just chosen.
Does anyone know a way to make the query pass through the user exit calculation again after it has been executed the first time? Any other ideas? I need the query to "re-execute" and filter the data (create a new interval) based on the value the user chose.
(sorry about any inconvenience, because I posted the problem in another sdn specific forum but as I received no answer I've decide to explain it in here...)
Thank you! Points will be assigned.
Any ideas, please?
-
Dates passed through SQL to oracle backend database
Post Author: kasmith
CA Forum: WebIntelligence Reporting
Can anyone tell me how WEBI sends a date used as a parameter to an oracle database?
I am trying to increase the response time of a report, but cannot tell how web intelligence is sending the date to oracle.
Many thanks - Kathy
Post Author: amr_foci
CA Forum: WebIntelligence Reporting
Kathy, it's all based on how you designed the universe your report is based on. WebI just sends the parameter as it is to the universe; I don't think the format matters for this. The universe level is the point at which you can control how the values are sent to the database.
good luck -
I just upgraded our env from 3.3.1 to 3.4.2-patch05.
I was intrigued by the following message in my log file.
INFO | jvm 1 | 2009/06/19 23:43:02 | [INFO ] 2009-06-19 23:43:02.053/7571.909 Oracle Coherence GE 3.4.2/411p5 <Info> (thread=Proxy:ExtendTcpProxyService33:TcpAcceptorWorker:1, member=10): The cache "XYZ" does not support pass-through optimization for objects in internal format. If possible, consider using a different cache topology.
I have no idea what it means...
Any information about this appreciated.
Thanks
Sumax
Hi Sumax,
starting with 3.4.0 it is possible to configure your cache services and invocation services to store data internally and communicate over the network in the POF format instead of the old Java serialization/ExternalizableLite format (EL from now on).
Since Coherence*Extend clients have always used the POF format to communicate with the proxy, if EL was used as the serialization format within the service in the cluster, the proxy node had to convert the data from EL to POF and vice versa. This required deserializing most data passing through the proxy (except for types known to both EL and POF, i.e. Strings and Java primitives). This was an expensive step in both CPU and memory terms.
On the other hand, if you configure your service to use POF as its serialization format, this conversion step is no longer necessary, and the proxy can pass the data through from the service to the client in a streaming manner, so the CPU cost is negligible and the memory cost is just a much smaller buffer. This, along with the related effects of POF being used within the TCMP cluster as a storage and communication format, is one of the greatest performance improvements the 3.4 release introduced.
Obviously, if you just dropped in the 3.4 jars in place of the 3.3 ones, then you did not configure the services to use POF.
To do so, you first have to ensure that all classes which are sent over the network are properly configured for POF serialization (they implement PortableObject or have a corresponding serializer, and the user type is properly registered in pof-config.xml). Then you can enable POF serialization either on a service-by-service basis (using the <serializer> element in the service configuration within coherence-cache-config.xml) or for all services with a single Java property (I don't know it off the top of my head), but it is safer to go service by service.
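For reference, the two configuration pieces described above might look roughly like this; the type id, class names, and scheme/service names are placeholders, not taken from the original poster's setup:

```xml
<!-- pof-config.xml: register every user type that crosses the wire -->
<pof-config>
  <user-type-list>
    <!-- pull in the built-in Coherence types first -->
    <include>coherence-pof-config.xml</include>
    <user-type>
      <type-id>1001</type-id> <!-- ids below 1000 are reserved for Coherence -->
      <class-name>com.example.Trade</class-name> <!-- implements PortableObject -->
    </user-type>
  </user-type-list>
</pof-config>

<!-- coherence-cache-config.xml: enable POF per service via <serializer> -->
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    <init-params>
      <init-param>
        <param-type>string</param-type>
        <param-value>pof-config.xml</param-value>
      </init-param>
    </init-params>
  </serializer>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```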
Best regards,
Robert -
Passing data through webservices.
Hi All,
I have two doubts.
1. Can we call a web service without creating an ABAP proxy?
2. If yes, how can I pass my internal table through the web service?
Any code snippet or wiki link regarding this would be very helpful.
TIA
Vikash Singh
Hi Vikash,
Go through the link below; it has a simple example which will help you.
just search for "Calling a web service in ABAP that validates an email id" and click on the link of saptechnicaldotcom
or try this
http://help.sap.com/saphelp_nw70ehp2/helpdata/EN/1f/93163f9959a808e10000000a114084/content.htm
Also, for transforming your internal table into XML, you can use the below piece of code
CALL TRANSFORMATION ID
SOURCE root = ITAB
RESULT XML xml_string.
And you can pass the XML_STRING as input to the "CREATE_BY_URL_METHOD".
But the XML_STRING will need to be in the format that the service can understand, you might have to modify it accordingly.
Regards,
Chen
Edited by: Chen K V on Apr 26, 2011 1:04 PM
Edited by: Chen K V on Apr 26, 2011 1:07 PM -
Data has changed after passing through FIFO?
Dear experts,
I am currently working on a digital triangular shaping using the 7966R FPGA + 5734 AI. I am using LabView 2012 SP1.
Some days ago I have encountered a problem with my FIFOs that I have not been able to solve since. I'd be glad if somebody could point out a solution/ my error.
Short description:
I am writing U16 values between ~32700-32800 to a U16-configured FIFO. The FIFO output does not coincide with the data I have been writing to the FIFO but is rather bit-shifted, or something is added. This problem does not occur if I execute the VI on the dev. PC with simulated input.
What I have done so far:
I am reading all 4 channels of the 5734 inside a SCTL. The data is stored in 4 feedback nodes. I am applying a triangular shaping to channels 0 and 1 by using 4 FIFOs that have been prefilled with a predefined number of zeros to serve as buffers. So it's something like (FB = feedback node):
A I/O 1 --> FB --> FIFO 1 --> FB --> FIFO 2 --> FB --> Do something
A I/O 2 --> FB --> FIFO 3 --> FB --> FIFO 4 --> FB --> Do something
This code shows NO weird behaviour and works as expected.
The Problem:
To reduce the amount of FIFOs needed I then decided to interleave the data and to use only 2 FIFOs instead of 4. You can see the code in the attachment. As you can see I have not really changed anything to the code structure in general.
The input to the FIFO is a U16. All FIFOs are configured to store U16 data.
The data that I am writing to the FIFO can be seen in channel 0 of the output attachment.
The output after passing through the two FIFOs can be seen in channel 2 of the same picture.
The output after passing through the first FIFO (times 2) can be seen in channel 3 of the picture.
It looks like the output is bit-shifted and truncated as it enters Buffer 1. Yet the difference between the input and output is not exactly a factor of 2. I also considered the possibility that the FIFO adds both write operations (CH0 + CH1) but that also does not account for the value of the output.
The FIFOs are all operating normally, i.e. none throws a timeout. I also tried several different orders of reading/writing to the FIFOs and different ways of ensuring this order (i.e. case structures, flat and stacked sequences). The FIFOs are also large enough to store the amount of data buffered, no matter whether I write or read first.
Thank you very much,
Bjorn
Attachments:
FPGA-code.png 61 KB
FPGA-output.png 45 KB
During the last couple of days I tried the following:
1. Running the FPGA code on the development PC with simulated I/O. The behavior was normal, i.e. like I've intended the code to perform.
2. I tested the code on the development PC with the square and sine wave generation VI as 'simulated' I/O. The code performed normal.
3. I replaced the FIFOs with queues and ran my logic on the dev. PC. The logic performed totally normal.
4. Right now the code is compiling with constants as inputs like you suggested...
I am currently trying to get LabView 2013 on the development machine. It seems like my last real hope is that the issue is a bug in the XILINX 13.4 compiler tools and that the 14.4 tools will just make it disappear...
Nevertheless I am still open to suggestions. Some additional info about the FIFOs of concern:
Buffer 1 and 2:
- Type: Target Scoped
- Elements Requested: 1023
- Implementation: Block Memory
- Control Logic: Target Optimal
- Data Type: U16
- Arbitrate for Read: Never Arbitrate
- No. Elements Per Read: 1
- Arbitrate for Write: Never Arbitrate
- No. Elements Per Write: 1
The inputs from the NI 5734 are U16, so I am wiring the right data type to the FIFOs. I also don't have any coercion dots within my FPGA VI. And so far the problem has only occurred after the VI has been compiled onto the FPGA. Could some of the FIFOs/block memory be corrupted because we have written to the FPGA too often? -
Compressing data through URLConnection
I was looking into URLConnection, trying to find a way to configure the connection (set my own sockets so that I can compress data going back and forth), similar to the way RMI handles this by letting you pass a clientSocketFactory and serverSocketFactory to UnicastRemoteObject. It seems there is no way to do that. I know I can specify a URLStreamHandlerFactory, but that does not seem to do what I am looking for.
I am looking for a way to control the underlying communication mechanism that the connection returned by URL.openConnection() uses. That would be possible if the API provided a way to pass <mechanism>Factories to the URL. If anyone has a solution to this, please email it to me.
I will give you some code to show what I am talking about.
//Servlet
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
public class DataCruncherServlet extends HttpServlet {

    public void init(ServletConfig config) throws ServletException {
        super.init(config);
    }

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        doPost(req, res);
    }

    public void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        ServletInputStream in = req.getInputStream();
        InputStreamReader inr = new InputStreamReader(in);
        StringBuffer sb = new StringBuffer();
        char data[] = new char[1024];
        int n;
        while ((n = inr.read(data)) != -1) {
            sb.append(data, 0, n); // append only the chars actually read
        }
        OutputStream out = res.getOutputStream();
        out.write(sb.toString().getBytes());
        inr.close(); // closing the reader also closes the underlying stream
        out.close();
    }
}
//For The client
import java.net.*;
import java.io.*;
public class DataCruncherClient {

    public static void main(String[] args) {
        URL fileURL = null;
        URLConnection con = null;
        StringBuffer buffer = null;
        OutputStream out = null;
        BufferedReader br = null;
        InputStreamReader in = null;
        try {
            fileURL = new URL("http://localhost:8000/myContext/DataCruncherServlet");
            // There is no way to control the underlying communication
            // mechanism (sockets, RMI, ...) the connection we get uses.
            con = fileURL.openConnection();
            con.setDoOutput(true);
            con.setDoInput(true);
            out = con.getOutputStream();
            br = new BufferedReader(new FileReader("test.txt"));
            StringBuffer sb = new StringBuffer();
            String line = null;
            while ((line = br.readLine()) != null) {
                sb.append(line);
            }
            out.write(sb.toString().getBytes());
            in = new InputStreamReader(con.getInputStream());
            char data[] = new char[1024];
            buffer = new StringBuffer();
            int n;
            while ((n = in.read(data)) != -1) {
                buffer.append(data, 0, n); // append only the chars actually read
            }
        } catch (MalformedURLException me) {
            System.out.println("MalFormed URLException: " + me.getMessage());
        } catch (FileNotFoundException fe) {
            System.out.println("File Not Found: " + fe.getMessage());
        } catch (IOException ioex) {
            System.out.println("IOEXception: " + ioex.getMessage());
        } finally {
            try {
                if (out != null) out.close();
                if (br != null) br.close();
                if (in != null) in.close();
            } catch (IOException ioex) {
                System.out.println("can not close stream: " + ioex.getMessage());
            }
        }
        System.out.println("Returned from Servlet is: ");
        System.out.println(buffer.toString());
    }
}
email me: [email protected]
s.append(char[]) and s.append(char[], int, int) are similar: each one is converted into a String through String.valueOf(char[]) and String.valueOf(char[], int, int) respectively, and then appended to the StringBuffer, so I do not see why one is more efficient than the other. Please explain.
From the implementation of StringBuffer:
public synchronized StringBuffer append(char str[], int offset, int len) {
    int newcount = count + len;
    if (newcount > value.length)
        expandCapacity(newcount);
    System.arraycopy(str, offset, value, count, len);
    count = newcount;
    return this;
}
Where do you see a conversion to String? Typically you do multiple appends and then one StringBuffer.toString(). This is different from creating a String on every append.
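A side note that also applies to the read loops in the code earlier in this thread: read() fills only part of the buffer, so appending the whole array (or wrapping it in new String(data)) copies stale or NUL characters into the result. A small sketch of the correct pattern (the class name is made up):

```java
import java.io.IOException;
import java.io.Reader;

public class ReadAll {

    // Append only the 'n' chars actually read: no intermediate String,
    // no stale buffer contents from a previous iteration.
    public static String readAll(Reader in) throws IOException {
        StringBuffer sb = new StringBuffer();
        char[] buf = new char[1024];
        int n;
        while ((n = in.read(buf)) != -1) {
            sb.append(buf, 0, n);
        }
        return sb.toString();
    }
}
```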
Using Zip streams can fix the particular problem I outlined, but I was thinking about a way to control the underlying communication mechanism that the connection (URLConnection) depends on.
So you wanted to hide the compression inside of URL.openStream()?
robert -
Data Guard as a pass through?
Scenario is . . .
Host A is the Primary
Host B is a Standby
Host C is a Standby
Now I know we can set up A->B and A->C
Can we set up A->B->C ?
Essentially using B as a pass through between A and C. Or you can see it as B being in a DMZ.
Would B have to be Active Data Guard or anything special?
I guess what I am really asking is: can a standby be used as the source for another standby?
See http://download.oracle.com/docs/cd/E11882_01/server.112/e10700/cascade_appx.htm#i638620 for more information on Cascaded Standby Destinations. There are a few restrictions:
Cascading has the following restrictions:
* Logical and snapshot standby databases cannot cascade primary database redo.
* SYNC destinations cannot cascade primary database redo in a Maximum Protection Data Guard configuration.
* Cascading is not supported in Data Guard configurations that contain an Oracle Real Application Clusters (RAC) primary database.
* Cascading is not supported in Data Guard broker configurations.
Keep an eye on this chapter and Note 409013.1 "Cascaded Standby Databases" when the next patch set for 11.2 comes out :^)
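For reference, on the intermediate standby (B) a cascaded destination is just another archive destination pointing at C, restricted to the standby role. A sketch with placeholder names (check the cascade appendix linked above for your exact version):

```sql
-- Run on standby B (the cascading standby): forward received redo on to C.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=standby_c VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
   DB_UNIQUE_NAME=standby_c' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE SCOPE=BOTH;
```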
Larry -
Does ODBC encrypt data while passing through the network?
Does ODBC encrypt data while passing through the network?
ODBC uses the underlying Oracle networking components to transmit data. By default, these components do not encrypt data, although they can be made to do so-- see the "SSL Encryption" thread from a few days ago.
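For reference, besides SSL there is Oracle native network encryption, switched on in sqlnet.ora; a minimal sketch (the AES256 choice and the client-side entries are example values):

```
# sqlnet.ora on the database server
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)

# sqlnet.ora on the ODBC client machine (optional; the default is 'accepted')
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)
```

Since ODBC rides on the Oracle Net layer, no application change is needed once the networking components are configured this way.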
Justin -
In the Encoder Tab of the Inspector Window, there is a pulldown menu to the right of the "Audio Settings..." Button. There are 3 choices in this pull-down menu: Enabled, Disabled, and Pass-through.
What does the Pass-through option mean?
My system:
Dual G5 2.3GHz
4.5 GB RAM
OS 10.4.2
FCP 5.0.2
QT Pro 7.0.3
Compressor 2.0.1
massachi
Some of the transcoders allow you to pass the audio or video through without re-compressing or modifying the original state of the encoded data. That's what "Pass-through" does.
Thus, if your audio is already in the correct state (for example AAC), then pass-through will just take the original audio and add it (unmodified) to the output file.
Not able to get the data from synchronous Webservice To BPEL process
Hi All,
My requirement is: a third party has a web service, and they are pushing data to that web service (WSDL).
Third part WSDL example : http://ipaddress:port/name/Service.svc?wsdl ( This is just example format of their WSDL)
After that I need to get that data into my BPEL process and update my system.
When I built my synchronous BPEL process, I imported the third-party WSDL (http://ipaddress:port/name/Service.svc?wsdl) through 'Import WSDL' in the dialog. After that I automatically got the request and response schema elements from that WSDL, and I took the input and output of the BPEL process from those elements.
I pasted that third-party URL into SOAPUI and got their operations and schemas. Based on that, I chose the elements for the 'input' and 'output' of the BPEL process. I am also getting the schema structures in the 'Assign' and 'Transform' activities.
I built the whole process.
I have the Process.
Now the client is pushing data to their WSDL (http://ipaddress:port/name/Service.svc?wsdl), as it is their data-pushing interface, but that data is not coming to my BPEL process, and no instance is being created in the EM console.
As I have imported their WSDL into my BPEL process, I expected to get the data, but I am not getting it.
Is there any problem in MY BPEL process?
(or)
Do I need to use a 'Web Service' adapter in the 'Exposed Services' swimlane in the composite editor to expose the third-party URL, so that they can push the data to that WSDL and it in turn comes into my BPEL process?
Can anybody help me this case?
once again my requirement is :
Client pushes the data through their WSDL URL -----> I need to get that data into my BPEL process --> I have my own WSDL to take those details into my system.
I will explain the requirement in a short paragraph:
There are two applications.One is our application(X) and another one is third party application(Y).
I need to update in my application(X) based on data coming from application(Y).
I am using SOA as a middle tier to have communication between Y and X.
(Ex: if they send some info like event type 'event1' from Y ,I need to update that 'event1' data in my X application)
The work at the third-party application:
According to their info, they will push data from their end to their WSDL ( http://ipaddress:port/name/Service.svc?wsdl ).
They are telling us they can only send the data to their WSDL ( http://ipaddress:port/name/Service.svc?wsdl ).
They will not consume our BPEL process (I think they might be wrong on this point). They have one WSDL to send or push the data from their end.
The work on our side (SOA & X application):
From that point, our BPEL process has to receive that data and update it in my application (X).
I hope You understand my requirement.
Can you guide me on how to achieve this task, given that they say they have to use their WSDL to push the data?
(or)
Do I need to use a 'Web Service' adapter in the Exposed Services swimlane in JDeveloper to expose their web service (the third-party WSDL)? If so, can you tell me how to take the 'input' and 'output' for the BPEL process?
(or)
Can you suggest that I talk to them about consuming my BPEL process directly?
Thanks
Edited by: 899283 on Aug 17, 2012 4:55 AM -
USB Pass-Through From Windows 8.1 Host To Windows Server 2012 R2 VM
I want to be able to connect with a Windows Mobile Device through Windows Mobile Device Center, within a Virtual Machine. When connecting through the Hyper-V Manager and through Remote Desktop, under "Other supported RemoteFX USB devices",
I can see the Symbol USB Sync Cradle. In the VM, in Device Manager, I don't see a USB connection. In the VM, I don't see any meaningful errors in the Event Viewer.
Host: Windows 8.1 Enterprise Hyper-V on a Domain. Upgraded from Windows 8.1 Pro. When this computer was originally installed with Windows 8 Pro, Hyper-V was enabled. I removed Hyper-V, and installed VMWare Player, because I wanted
USB Pass-through. I then uninstalled VMWare and installed VirtualBox. Recently, I uninstalled VirtualBox, upgraded to Windows 8.1 Enterprise, and enabled Hyper-V.
Virtual Machine OS: Windows Server 2012 R2 on a Workgroup. Started out with being a VMWare VM, using VMWare Player. Moved to VirtualBox. USB Pass-through was working in both those virtual environments. Used Disk2VHD to convert the
VM to a VHDX file.
On the Host:
Windows Mobile Device Center is connected to a Motorola Windows Mobile Device (MC959X) sitting in a Symbol USB Cradle. The OS on the scanner is Windows Embedded Handheld 6.5 Classic CE OS 5.2.29217 (Build 29217.5.3.12.26). Advanced Networking
(USB to PC) is not enabled.
Enabled RemoteFX.
In the RDP file, and in the Registry, added the GUID's for:
WPD "{eec5ad98-8080-425f-922a-dabf3de3f69a}";
Windows Mobile "{6AC27878-A6FA-4155-BA85-F98F491D4F33}";
USB Device "{88BAE032-5A81-49f0-BC3D-A4FF138216D6}";
Windows CE USB Device "{25dbce51-6c8f-4a72-8a6d-b54c2b4fc835}";
GUID_DEVINTERFACE_USB_DEVICE "{A5DCBF10-6530-11D2-901F-00C04FB951ED}"
Ran "sfc /scannow"
All Microsoft Updates are current.
What am I missing?
I hope it's something like that. Those features have been installed. Here's what PowerShell shows is installed:
PS C:\Windows\system32> Get-WindowsFeature |Where {$_.Installed -eq "True"} | ft DisplayName, Installed
DisplayName                                        Installed
-----------                                        ---------
File and Storage Services                          True
File and iSCSI Services                            True
File Server                                        True
Storage Services                                   True
Remote Desktop Services                            True
Remote Desktop Licensing                           True
Remote Desktop Session Host                        True
Web Server (IIS)                                   True
Web Server                                         True
Common HTTP Features                               True
Default Document                                   True
Directory Browsing                                 True
HTTP Errors                                        True
Static Content                                     True
HTTP Redirection                                   True
Health and Diagnostics                             True
HTTP Logging                                       True
Performance                                        True
Static Content Compression                         True
Security                                           True
Request Filtering                                  True
Windows Authentication                             True
Application Development                            True
.NET Extensibility 3.5                             True
.NET Extensibility 4.5                             True
ASP.NET 3.5                                        True
ASP.NET 4.5                                        True
ISAPI Extensions                                   True
ISAPI Filters                                      True
Management Tools                                   True
IIS Management Console                             True
.NET Framework 3.5 Features                        True
.NET Framework 3.5 (includes .NET 2.0 and 3.0)     True
.NET Framework 4.5 Features                        True
.NET Framework 4.5                                 True
ASP.NET 4.5                                        True
WCF Services                                       True
TCP Port Sharing                                   True
Ink and Handwriting Services                       True
Media Foundation                                   True
Remote Server Administration Tools                 True
Role Administration Tools                          True
Remote Desktop Services Tools                      True
Remote Desktop Licensing Diagnoser Tools           True
Remote Desktop Licensing Tools                     True
SMB 1.0/CIFS File Sharing Support                  True
User Interfaces and Infrastructure                 True
Graphical Management Tools and Infrastructure      True
Desktop Experience                                 True
Server Graphical Shell                             True
Windows PowerShell                                 True
Windows PowerShell 4.0                             True
Windows PowerShell 2.0 Engine                      True
Windows PowerShell ISE                             True
WoW64 Support                                      True
-
DMS Document upload: does it pass through sap DMS ?
Dear All,
We have a question concerning the transmission of documents from the client to DMS and the Content Server: does the document need to pass through the SAP server?
Document upload (create document, CV01N):
Does the document go directly from the client to the Content Server, OR does it pass through the SAP DMS server before being stored in the CS?
Document download (read document, CV03N):
Does the document go directly from the CS to the client, OR does it pass through the SAP DMS server?
This could be interesting to know for network performance.
best regards,
Hi Gurus,
Is the cache server a default functionality of the content server, or is some configuration required on our part?
Does the cache server act as the RAM of our system?
Please explain the partitioning (bifurcation) of the content server: as you said, the content server is divided into storage categories and these in turn into content repositories.
Please clarify the points below:
1) Any server or PC can be made a content server by installing the content server CD, if I am right?
2) What are the practical and functional benefits of partitioning a content server into content repositories? Is it for authorization and storing data by naming convention, or can it also help in copying data from a specific content repository if needed? (Are content repositories a logical partition or a physical partition, like the B, C, D, F drives of our PC hard disk?)
3) Can/should there be multiple content server installations for a particular (production) client?
4) Can archiving be done, say, by creating a separate content repository inside the same content server, or is it mandatory to have a separate archiving server?
please give some idea with examples
Thanks and regards
Kumar -
I wonder if anyone could please help. I have just bought an Apple TV and am trying to use it with my old Samsung DLP (with no HDMI, but DVI) and a Denon AVR 789 (which has HDMI pass-through). When I connect the Apple TV through the receiver (using an HDMI to HDMI lead, then HDMI to DVI) I get no picture. When I go directly from the Apple TV to the TV using HDMI to DVI it works fine. I have tried changing the cables, and when I connect my FiOS TV box through the receiver it works fine. Also my Apple TV software is up to date. Could this be something to do with HDCP?
I would be grateful for any help.
Yes, it could be HDCP.
Generally, I would look at the source device (the TV) as the root of the problem, in that it doesn't handle HDCP hopping properly; however, the TV works with other receivers, so I have to have my doubts about where the blame lies for these problems.
You could try powering up the devices in a different order, say from the delivery end first.
It could be however that you simply don't have the receiver set correctly and need to match up the inputs and outputs.