We appreciate your feedback. However, in order to resolve any issues you may encounter, we would like you to provide us with as much information as possible, such as:
Please try to answer these questions as best you can when you create a new troubleshooting post. The more details you provide, the faster we can resolve the issue.
Additionally, you can send us the logs located in:
On Windows:
..\Users\*user*\AppData\Roaming\Dell\Toad for Apache Hadoop\*version*\log
..\Users\*user*\AppData\Roaming\Dell\Toad for Apache Hadoop\*version*\.metadata\.log
On macOS:
/Users/[user]/Library/Containers/com.dell.ToadForApacheHadoop/Data/.dell/Toad for Apache Hadoop/[version]/log/log4j.log
/Users/[user]/Library/Containers/com.dell.ToadForApacheHadoop/Data/.dell/Toad for Apache Hadoop/[version]/.metadata/.log
Not able to upload/download a file to/from an encrypted area on HDFS. Please suggest a resolution.
We are getting the following error message while uploading a file via Toad:
2017-10-09 18:58:54 ERROR HDFSUploadHandler:288 - Error uploading HDFS file
java.security.PrivilegedActionException: java.io.IOException: No KeyProvider is configured, cannot access an encrypted file
2017-10-09 18:58:54 INFO DefaultExceptionParser:58 - java.security.PrivilegedActionException: java.io.IOException: No KeyProvider is configured, cannot access an encrypted file
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Unknown Source)
	at com.dell.tfh.control.service.HDFSService.copy(HDFSService.java:977)
	at com.dell.tfh.control.service.HDFSService.uploadFile(HDFSService.java:840)
	at com.dell.tfh.control.service.HDFSService.upload(HDFSService.java:893)
	at com.dell.tfh.gui.hdfs.handler.HDFSUploadHandler$1.runJob(HDFSUploadHandler.java:277)
	at com.dell.tfh.gui.commons.jobs.AbstractToadJob.run(AbstractToadJob.java:163)
	at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.io.IOException: No KeyProvider is configured, cannot access an encrypted file
	at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1411)
	at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1522)
	at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1507)
	at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:408)
	at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:401)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:401)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:344)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:920)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:901)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
	at com.dell.tfh.library.hadoop.cdh5.FileSystemCDH5.create(FileSystemCDH5.java:60)
	at com.dell.tfh.control.service.HDFSService$22.run(HDFSService.java:980)
	at com.dell.tfh.control.service.HDFSService$22.run(HDFSService.java:1)
	... 8 more
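This error usually means the HDFS client has no key provider URI configured, so it cannot fetch the encryption-zone key from the KMS. As a sketch only (not a confirmed fix for Toad specifically), the client-side Hadoop setting on CDH 5 is typically `dfs.encryption.key.provider.uri`; the host and port below are placeholders for your cluster's KMS:

```xml
<!-- hdfs-site.xml (client side) - sketch only; kms-host and 16000
     are placeholders for the KMS address in your environment. -->
<property>
  <name>dfs.encryption.key.provider.uri</name>
  <value>kms://http@kms-host:16000/kms</value>
</property>
```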
Previously I used Toad 1.5.1, connecting via JDBC, and everything worked. Today I updated to Toad version 1.5.3, and connecting via JDBC stopped working: The server failed to respond with a valid HTTP response.
Looking at the log:
Caused by: org.apache.thrift.transport.TTransportException: Could not create http connection to jdbc:hive2://servername.net:10000/;transportMode=http;httpPath=param1;param2=true. org.apache.http.client.ClientProtocolException
The connection string should be: jdbc:hive2://servername.net:10000/param1;param2=true
I typed it into the "JDBC connection string" tab, and everything works again. It seems that in version 1.5.3 there is a bug in how the HTTP request is formed.
Please check.
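For readers hitting the same issue, the corrected format described above can be sketched as a tiny helper. The function name and parameters are hypothetical, purely for illustration of where the path and extra parameters go:

```python
# Hypothetical helper, for illustration only: builds a URL in the
# corrected format reported above (path after the slash, extra
# parameters appended with semicolons).
def hive2_url(host, port, path, **params):
    extras = "".join(";{}={}".format(k, v) for k, v in params.items())
    return "jdbc:hive2://{}:{}/{}{}".format(host, port, path, extras)

print(hive2_url("servername.net", 10000, "param1", param2="true"))
# jdbc:hive2://servername.net:10000/param1;param2=true
```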
I have installed the Hortonworks VM.
I downloaded the Toad for Hadoop plugin and am trying to configure it.
It is not connecting, and it fails at the Hive host step: it says the host is not reachable.
Whereas I am able to run queries from the Hive view on the Ambari page.
Please help me, and please also find the attached file.
Thanks in advance.
Attachment: Error_Connecting to Hive.docx
I'm trying to set up Toad for Hadoop. We use Ambari and Hortonworks HDP 2.4. I enter the credentials and connection information, and it checks out everything until it gets to "Getting cluster configuration", where the status stays at "detecting...". It won't advance past that step.
I am new to using Toad for Hadoop. I have installed Toad for Hadoop 1.5.3, and Hadoop 2.6 is installed on my Ubuntu VM. I am trying to create a new ecosystem by doing the configuration manually. However, I am not able to proceed: I get stuck at the very first step, where it is not able to identify the NameNode FQDN.
I have attached the error screenshots for reference.
Any help is highly appreciated.
Thanks in Advance!!
While connecting to a CDH server using Toad, I am getting this error:
HDFS configuration: An exception was caught. Class com.wandisco.fs.client.FusionHdfs not found
I am using the CDH 5.5 driver version and tried the same with 5.8; both give the same error.
All the details for the NameNode and Kerberos are entered correctly. Please let us know if this is a known defect.
I am getting the below error while transferring data from Oracle to Hive:
The table ban_errors has not been transferred: Cannot start transfer execution. UnknownHostException: HDFSNS
I am getting the below error while transferring data from Hive to Oracle:
The table CYCLE_STATE has not been transferred: An exception was caught. org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/admin/.tfah/SQOOP_EXPORT/20170821_173943_587/lib/ToadManager.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
I love this tool, but before I suggest it to my organization I wanted to ask if it is still in development. I noticed the last version came out last year.
This tool really stands out from the crowd of other SQL clients. I was specifically looking for a client that can access logs and could not find one. Then I stumbled upon an SO post and gave it a try. It is almost perfect: I can see MapReduce logs, but it looks like they are shown only once the query is finished.
I love that I can check on running and finished jobs right from the tool. The HDFS file capability and the copy from Oracle are pretty neat too.
Thanks for such a great little tool!
Toad Hadoop Team,
I tried out v1.1.8, and the SSL component is working great. Thanks!
During the New Ecosystem configuration, I got a red exclamation point error in the setup wizard at the "Getting Cluster Configuration" step. We are using CDH 5.4.3. The error is below; there was no information in the Status or Notes fields of the wizard.
I am using a 32-bit Windows OS; is there a Toad build for this version?
select * from t;
returns over 100,000 rows. I click Save as CSV, and wait... and wait... and wait...
Then I used Wireshark to capture the network traffic to see what was happening.
I was totally shocked! Can you guess what I saw? There were many, many Thrift "FetchResults" requests, fetching the result one row at a time!
You are Dell Inc., so I think this is not just a trivial performance issue for a big company, but a big bug! Can you fix it?
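The row-by-row behavior described above is exactly what batched fetching avoids. As a generic illustration only (using Python's DB-API with an in-memory SQLite database as a stand-in for the Hive connection, since Thrift details are not shown here), fetching in batches turns one round trip per row into one per batch:

```python
import sqlite3

# Stand-in for the Hive connection: an in-memory SQLite table with 100,000 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(100000)))

cur = conn.execute("SELECT * FROM t")
rows = []
while True:
    batch = cur.fetchmany(10000)  # one fetch call per 10,000 rows, not per row
    if not batch:
        break
    rows.extend(batch)
print(len(rows))  # 100000
```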
I have downloaded Toad for Hadoop 1.5.3 (for Windows). However, when adding a new ecosystem (I use Cloudera Manager), I get an error saying my CDH version 5.1 is not supported.
Is there any workaround for this?
Hi, my system configuration:
Toad Data Point version: 4.2
System RAM: 6 GB
Hard disk: 256 GB
OS: Windows 7
I am trying to compare a Hive table with an XSD file using the Data Diff Viewer.
I have connected to the Hive database and dragged and dropped the employee table into the Data Diff Viewer; on the other side I have loaded the XSD file. The source is the Hive table and the target is the XSD file.
In the above screen, the target columns are not showing.
This is my XSD format:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="www.w3.org/.../XMLSchema">
  <xs:element name="DataTable" type="DataTableType"/>
  <xs:complexType name="DataTableType">
    <xs:sequence>
      <xs:element type="employeeType" name="employee" maxOccurs="unbounded" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="employeeType">
    <xs:sequence>
      <xs:element name="eid">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:enumeration value="1201"/>
            <xs:enumeration value="1202"/>
            <xs:enumeration value=""/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="name">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:enumeration value="Gopal1"/>
            <xs:enumeration value="Manish2"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="salary">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:enumeration value="45000"/>
            <xs:enumeration value="40000"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="destination">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:enumeration value="Technical manager"/>
            <xs:enumeration value="Proof reader"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element type="xs:string" name="address1"/>
      <xs:element type="xs:string" name="address2"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
Is there any alternative way to compare data between a Hive table and an XSD file?
Regards,
Narendra K
Running Toad for Apache Hadoop version 1.5.3 on Mac OSX Sierra
connecting to Cloudera 5.8 environment with 5 virtual data nodes
I have tried using FQDN and IP addresses
Quick configuration works as expected
Ecosystem Configuration works, check credentials works
use keytab option
Username and realm will be overridden by keytab, continue YES
JCE is active
When I get to the next screen:
SQL, Charts, and Logs are Green
HDFS and Transfer are red, and the following errors are thrown:
A problem has occurred. Server has invalid Kerberos principal: hdfs/c01nhvu123.nh.corp@HADOOP.NANTHEALTH.COM
When I retry:
A problem has occurred. Server has invalid Kerberos principal: hdfs/c01nhvu121.nh.corp@HADOOP.NANTHEALTH.COM
These are name nodes.
In the Toad logs I see errors like this:
!ENTRY com.dell.tfh.library.hadoop.cdh5.8 4 0 2017-06-26 10:34:44.179
!MESSAGE FrameworkEvent ERROR
!STACK 0
java.io.IOException: Exception in opening zip file: /Users/teds/Library/Containers/com.dell.ToadForApacheHadoop/Data/.eclipse/415756717_macosx_cocoa_x86_64/configuration/org.eclipse.osgi/20/0/.cp/lib/sqoop-1.4.6-cdh5.8.0.jar
Caused by: java.io.FileNotFoundException: /Users/teds/Library/Containers/com.dell.ToadForApacheHadoop/Data/.eclipse/415756717_macosx_cocoa_x86_64/configuration/org.eclipse.osgi/20/0/.cp/lib/sqoop-1.4.6-cdh5.8.0.jar (No such file or directory)
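The "invalid Kerberos principal" error typically appears when the principal the client constructs for the NameNode does not match the one the server actually uses; Hadoop clients build it from a pattern with a `_HOST` placeholder that is substituted via DNS. A sketch only, with the realm taken from the error messages above (whether this property is what Toad needs here is an assumption):

```xml
<!-- hdfs-site.xml (client side) - sketch; _HOST is replaced with the
     NameNode's canonical hostname, so forward and reverse DNS for
     c01nhvu121.nh.corp / c01nhvu123.nh.corp must resolve consistently. -->
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs/_HOST@HADOOP.NANTHEALTH.COM</value>
</property>
```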
I just installed Toad for Hadoop. How do I connect to the HUE browser in the Microsoft Azure cloud?
We have the HUE browser URL, username, and password.
Toad for Hadoop shows high CPU and memory usage, which causes it to stop returning data after a few queries have been executed.
It looks like some background process is taking a lot of CPU and RAM.
The only workaround I have is restarting Toad after a few queries!